I'm applying an integration function using scipy.integrate to two 2D arrays. Here's the example:
from scipy import integrate
import numpy as np

def integrate_lno2(top, bottom, peak_height, peak_width):
    return integrate.quad(lambda x: np.exp(-np.power(x - peak_height, 2) / (2 * np.power(peak_width, 2))), top, bottom)[0]

# change row and col to test speed
row = 100; col = 100; peak_height = 300; peak_width = 60

top = np.linspace(100, 200, row*col).reshape(row, col)
bottom = np.linspace(800, 900, row*col).reshape(row, col)

res = np.zeros((row, col))
for i in range(row):
    for j in range(col):
        res[i, j] = integrate_lno2(top[i, j], bottom[i, j], peak_height, peak_width)
As the shape of the 2D arrays grows, the for loop becomes slow. I found the Numba integrand example, but it doesn't accept the upper and lower limits.
Like in this previous answer, you can use Numba to speed up the lambda calls, which are very slow due to large NumPy overheads (NumPy is not optimized to operate on scalars and is very slow at doing so). Even better: you can tell Numba to generate a C function which can be called directly from SciPy with very little overhead (since it almost completely removes the overhead of the slow CPython interpreter). You can also pre-compute the division by a variable and convert it to a multiplication (which is faster).
Here is the resulting code:
import numba as nb
import numpy as np
import scipy as sp
import scipy.integrate  # makes sp.integrate available

# change row and col to test speed
row = 100; col = 100; peak_height = 300; peak_width = 60

factor = -1.0 / (2 * np.power(peak_width, 2))

@nb.cfunc('float64(float64)')
def compute_numba(x):
    return np.exp(np.power(x - peak_height, 2) * factor)

compute_c = sp.LowLevelCallable(compute_numba.ctypes)

def integrate_lno2(top, bottom):
    return sp.integrate.quad(compute_c, top, bottom)[0]

top = np.linspace(100, 200, row*col).reshape(row, col)
bottom = np.linspace(800, 900, row*col).reshape(row, col)

res = np.zeros((row, col))
for i in range(row):
    for j in range(col):
        res[i, j] = integrate_lno2(top[i, j], bottom[i, j])
The computing loop is roughly 100 times faster on my machine.
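As an aside, separate from the Numba approach above: because the integrand here is an unnormalized Gaussian, the integral also has a closed form in terms of the error function, so the whole grid can be evaluated at once without any quadrature. A minimal sketch, assuming the same top, bottom, peak_height and peak_width as above (integrate_lno2_closed is just an illustrative name):

import numpy as np
from scipy.special import erf

# Closed-form sketch: the antiderivative of exp(-(x - h)^2 / (2 w^2)) is
# w * sqrt(pi/2) * erf((x - h) / (w * sqrt(2))), so the definite integral
# over [top, bottom] can be evaluated for the whole grid in one shot.
def integrate_lno2_closed(top, bottom, peak_height, peak_width):
    s = peak_width * np.sqrt(2.0)
    c = peak_width * np.sqrt(np.pi / 2.0)
    return c * (erf((bottom - peak_height) / s) - erf((top - peak_height) / s))

res_closed = integrate_lno2_closed(top, bottom, peak_height, peak_width)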
Related
I want to generate a random matrix of shape (1e7, 800). But I find numpy.random.rand() becomes very slow at this scale. Is there a quicker way?
A simple way to do that is to write a multi-threaded implementation using Numba:
import numba as nb
import numpy as np

@nb.njit('float64[:,:](int_, int_)', parallel=True)
def genRandom(n, m):
    res = np.empty((n, m))
    # Parallel loop
    for i in nb.prange(n):
        for j in range(m):
            res[i, j] = np.random.rand()
    return res
This is 6.4 times faster than np.random.rand() on my 6-core machine.
Note that using 32-bit floats may speed up the computation a bit, although the precision will be lower.
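A minimal sketch of that 32-bit variant, keeping the same structure as genRandom above (genRandom32 is just an illustrative name):

import numba as nb
import numpy as np

@nb.njit('float32[:,:](int_, int_)', parallel=True)
def genRandom32(n, m):
    # Same parallel fill as above, but storing 32-bit floats
    res = np.empty((n, m), dtype=np.float32)
    for i in nb.prange(n):
        for j in range(m):
            res[i, j] = np.random.rand()
    return res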
Numba is a good option; another option that might work well is dask.array, which creates lazy blocks of NumPy arrays and performs parallel computations on the blocks. On my machine I get a factor of 2 improvement in speed (for a 1e6 x 1e3 matrix, since I don't have enough memory for the full size).
rows = 10**6
cols = 10**3
import dask.array as da
x = da.random.random(size=(rows, cols)).compute() # takes about 5 seconds
# import numpy as np
# x = np.random.rand(rows, cols) # takes about 10 seconds
Note that .compute() at the end is only there to bring the computed array into memory; in general you can keep the array lazy and continue to exploit Dask's parallel computations to get much faster calculations (which can also scale beyond a single machine), see the docs.
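For instance, a small sketch of keeping the array lazy and only materializing the final reduction (the column-mean step is just an illustrative follow-up computation, not part of the original answer):

import dask.array as da

# Keep the random array lazy, chain further work on it, and only call
# .compute() on the (much smaller) final result.
x = da.random.random(size=(10**6, 10**3))
col_means = x.mean(axis=0)      # still lazy, nothing computed yet
result = col_means.compute()    # parallel computation happens here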
An attempt to compare the answers given so far:
I wrote a script that combines the answers already given (by SultanOrazbayev and Jérôme Richard); it contains one function for each of the Numba, Dask and NumPy approaches and measures the time spent for a range of differently sized arrays.
The code
import dask.array as da
import matplotlib.pyplot as plt
import numba as nb
import timeit
import numpy as np

@nb.njit('float64[:,:](int_, int_)', parallel=True)
def nmb(n, m):
    res = np.empty((n, m))
    # Parallel loop
    for i in nb.prange(n):
        for j in range(m):
            res[i, j] = np.random.rand()
    return res

def nmp(n, m):
    return np.random.random((n, m))

def dask(n, m):
    return da.random.random(size=(n, m)).compute()

if __name__ == '__main__':
    data = []
    for i in range(1, 16):
        dmm = 2 ** i

        s_nmb = timeit.default_timer()
        nmb(dmm, dmm)
        e_nmb = timeit.default_timer()

        s_nmp = timeit.default_timer()
        nmp(dmm, dmm)
        e_nmp = timeit.default_timer()

        s_dask = timeit.default_timer()
        dask(dmm, dmm)
        e_dask = timeit.default_timer()

        data.append([
            dmm,
            e_nmb - s_nmb,
            e_nmp - s_nmp,
            e_dask - s_dask
        ])

    data = np.array(data)
    plt.plot(data[:, 0], data[:, 1], "-r", label="Numba")
    plt.plot(data[:, 0], data[:, 2], "-g", label="Numpy")
    plt.plot(data[:, 0], data[:, 3], "-b", label="Dask")
    plt.xlabel("Number of elements on each axis")
    plt.ylabel("Time spent (s)")
    plt.legend()
    plt.show()
The result is a plot of the time spent by each approach against the number of elements per axis.
I'm trying to learn more about using shared memory to improve the performance of some CUDA kernels in Numba. For this I was looking at the matrix multiplication example in the Numba documentation and tried to implement it to see the gain.
This is my test implementation. I'm aware that the example in the documentation has some issues that are discussed here, so I copied the fixed example code.
from timeit import default_timer as timer

import numba
from numba import cuda, jit, int32, int64, float64, float32
import numpy as np

@cuda.jit
def matmul(A, B, C):
    """Perform square matrix multiplication of C = A * B"""
    i, j = cuda.grid(2)
    if i < C.shape[0] and j < C.shape[1]:
        tmp = 0.
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        C[i, j] = tmp

# Controls threads per block and shared memory usage.
# The computation will be done on blocks of TPBxTPB elements.
TPB = 16

@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[ty, tx] = 0
        sB[ty, tx] = 0
        if y < A.shape[0] and (tx + i * TPB) < A.shape[1]:
            sA[ty, tx] = A[y, tx + i * TPB]
        if x < B.shape[1] and (ty + i * TPB) < B.shape[0]:
            sB[ty, tx] = B[ty + i * TPB, x]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[ty, j] * sB[j, tx]

        # Wait until all threads finish computing
        cuda.syncthreads()

    if y < C.shape[0] and x < C.shape[1]:
        C[y, x] = tmp

size = 1024 * 4
tpbx, tpby = 16, 16
tpb = (tpbx, tpby)
bpgx, bpgy = int(size / tpbx), int(size / tpby)
bpg = (bpgx, bpgy)

a_in = cuda.to_device(np.arange(size * size, dtype=np.float32).reshape((size, size)))
b_in = cuda.to_device(np.ones(size * size, dtype=np.float32).reshape((size, size)))
c_out1 = cuda.device_array_like(a_in)
c_out2 = cuda.device_array_like(a_in)

s = timer()
cuda.synchronize()
matmul[bpg, tpb](a_in, b_in, c_out1)
cuda.synchronize()
gpu_time = timer() - s
print(gpu_time)

c_host1 = c_out1.copy_to_host()
print(c_host1)

s = timer()
cuda.synchronize()
fast_matmul[bpg, tpb](a_in, b_in, c_out2)
cuda.synchronize()
gpu_time = timer() - s
print(gpu_time)

c_host2 = c_out2.copy_to_host()
print(c_host2)
The execution times of the two kernels are essentially the same; in fact, matmul was faster for some larger input matrices. I would like to know what I'm missing in order to see the gain the documentation suggests.
Thanks,
Bruno.
I made a performance mistake in the code I put in that other answer. I've now fixed it. In a nutshell this line:
tmp = 0.
caused numba to create a 64-bit floating point variable tmp. That triggered other arithmetic in the kernel to be promoted from 32-bit floating point to 64-bit floating point. That is inconsistent with the rest of the arithmetic and also inconsistent with the intent of the demonstration in the other answer. This error affects both kernels.
When I change it in both kernels to
tmp = float32(0.)
both kernels get noticeably faster, and on my GTX960 GPU, your test case shows that the shared code runs about 2x faster than the non-shared code (but see below).
The non-shared kernel also has a performance issue related to memory access patterns. Similar to the indices swap in that other answer, for this particular scenario only, we can rectify this problem simply by reversing the assigned indices:
j, i = cuda.grid(2)
in the non-shared kernel. This allows that kernel to perform approximately as well as it can, and with that change the shared kernel runs about 2x faster than the non-shared kernel. Without that additional change to the non-shared kernel, the performance of the non-shared kernel is much worse.
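For reference, a sketch of the non-shared kernel with both of the described changes applied (the float32 accumulator and the swapped grid indices) would look like this:

from numba import cuda, float32

@cuda.jit
def matmul(A, B, C):
    """Naive square matrix multiplication C = A * B, with a float32
    accumulator and swapped grid indices for better memory access."""
    j, i = cuda.grid(2)
    if i < C.shape[0] and j < C.shape[1]:
        tmp = float32(0.)
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        C[i, j] = tmp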
I'm trying to accelerate a DP algorithm in Python, and Numba seemed like an appropriate candidate.
I'm doing a subtraction of a 2D array with a 1D array which delivers a 3D array. I'm then using .argmin() along the 3rd dimension to obtain a 2D array. This works just fine with numpy, but doesn't with numba.
Toy code reproducing the issue:
from numba import jit
import numpy as np

inflow = np.arange(1, 0, -0.01)                       # Dim [T]
actions = np.arange(0, 1, 0.05)                       # Dim [M]
start_lvl = np.random.rand(500).reshape(-1, 1) * 49   # Dim [N x 1]
disc_lvl = np.arange(0, 1000)                         # Dim [O]

@jit(nopython=True)
def my_func(disc_lvl, actions, start_lvl, inflow):
    for i in range(0, 100):
        # Calculate new level at time i
        new_lvl = start_lvl + inflow[i] + actions          # Dim [N x M]
        # For each new_level element, find closest discretized level
        diff = (disc_lvl - new_lvl[:, :, np.newaxis])      # Dim [N x M x O]
        idx_lvl = abs(diff).argmin(axis=2)                 # Dim [N x M]
    return True

# function works fine without numba
success = my_func(disc_lvl, actions, start_lvl, inflow)
Why doesn't the code above run? It does when I take out @jit(nopython=True).
Is there a workaround to make this calculation work with Numba?
I've tried variants with NumPy repeat and expand_dims, as well as explicitly defining the input types of the jitted function, without success.
There are a few things you need to change to make it work:
Adding a dimension with arr[:, :, None]: for Numba this looks like a plain getitem, so prefer using reshape instead.
Use np.abs instead of built-in abs
The argmin with axis keyword argument is not implemented. Prefer using loops, which Numba is designed to optimize.
With all this fixed you can run the jitted function:
from numba import jit
import numpy as np

inflow = np.arange(1, 0, -0.01)                       # Dim [T]
actions = np.arange(0, 1, 0.05)                       # Dim [M]
start_lvl = np.random.rand(500).reshape(-1, 1) * 49   # Dim [N x 1]
disc_lvl = np.arange(0, 1000)                         # Dim [O]

@jit(nopython=True)
def my_func(disc_lvl, actions, start_lvl, inflow):
    for i in range(0, 100):
        # Calculate new level at time i
        new_lvl = start_lvl + inflow[i] + actions     # Dim [N x M]
        # For each new_level element, find closest discretized level
        new_lvl_3d = new_lvl.reshape(*new_lvl.shape, 1)
        diff = np.abs(disc_lvl - new_lvl_3d)          # Dim [N x M x O]
        idx_lvl = np.empty(new_lvl.shape)
        for i in range(diff.shape[0]):
            for j in range(diff.shape[1]):
                idx_lvl[i, j] = diff[i, j, :].argmin()
    return True

# function works fine without numba
success = my_func(disc_lvl, actions, start_lvl, inflow)
Below is the corrected code from my first post, which you can execute with and without the jitted mode of the Numba library (by removing the line that starts with @jit). I've observed a speed increase by a factor of 2 for this example.
from numba import jit
import numpy as np
import datetime as dt

inflow = np.arange(1, 0, -0.01)                       # Dim [T]
nbTime = np.shape(inflow)[0]
actions = np.arange(0, 1, 0.01)                       # Dim [M]
start_lvl = np.random.rand(500).reshape(-1, 1) * 49   # Dim [N x 1]
disc_lvl = np.arange(0, 1000)                         # Dim [O]

@jit(nopython=True)
def my_func(nbTime, disc_lvl, actions, start_lvl, inflow):
    # Initialize result
    res = np.empty((nbTime, np.shape(start_lvl)[0], np.shape(actions)[0]))
    for t in range(0, nbTime):
        # Calculate new level at time t
        new_lvl = start_lvl + inflow[t] + actions     # Dim [N x M]
        print(t)
        # For each new_level element, find closest discretized level
        new_lvl_3d = new_lvl.reshape(*new_lvl.shape, 1)
        diff = np.abs(disc_lvl - new_lvl_3d)          # Dim [N x M x O]
        idx_lvl = np.empty(new_lvl.shape)
        for i in range(diff.shape[0]):
            for j in range(diff.shape[1]):
                idx_lvl[i, j] = diff[i, j, :].argmin()
        res[t, :, :] = idx_lvl
    return res

# Call function and print running time
start_time = dt.datetime.now()
result = my_func(nbTime, disc_lvl, actions, start_lvl, inflow)
print('Execution time :', (dt.datetime.now() - start_time))
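As a further sketch (not part of the original post), the nested argmin search could additionally be parallelized across rows with Numba's prange, assuming a diff array with the same [N x M x O] shape as above:

from numba import njit, prange
import numpy as np

@njit(parallel=True)
def argmin_axis2(diff):
    # Row-parallel argmin along the last axis of a 3D array
    n, m, o = diff.shape
    idx = np.empty((n, m), dtype=np.int64)
    for i in prange(n):
        for j in range(m):
            best_k = 0
            best = diff[i, j, 0]
            for k in range(1, o):
                if diff[i, j, k] < best:
                    best = diff[i, j, k]
                    best_k = k
            idx[i, j] = best_k
    return idx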
Part of my Python program contains the following piece of code, where a new grid is calculated based on data found in the old grid.
The grid is a two-dimensional list of floats. The code uses three for-loops:
for t in xrange(0, t, step):
    for h in xrange(1, height-1):
        for w in xrange(1, width-1):
            new_gr[h][w] = gr[h][w] + gr[h][w-1] + gr[h-1][w] + t * gr[h+1][w-1] - 2 * (gr[h][w-1] + t * gr[h-1][w])
    gr = new_gr
return gr
The code is extremely slow for a large grid and a large time t.
I've tried to use NumPy to speed up this code by substituting the inner loop with:
J = np.arange(1, width-1)
new_gr[h][J] = gr[h][J] + gr[h][J-1] ...
But the results produced (the floats in the array) are about 10% smaller than
their list-calculation counterparts.
What loss of accuracy is to be expected when converting lists of floats to Numpy array of floats using np.array(pylist) and then doing a calculation?
How should I go about converting a triple for-loop to pretty and fast Numpy code? (or are there other suggestions for speeding up the code significantly?)
If gr is a list of floats, the first step if you are looking to vectorize with NumPy would be to convert gr to a NumPy array with np.array().
Next up, I am assuming that you have new_gr initialized with zeros of shape (height,width). The calculations being performed in the two innermost loops basically represent 2D convolution. So, you can use signal.convolve2d with an appropriate kernel. To decide on the kernel, we need to look at the scaling factors and make a 3 x 3 kernel out of them and negate them to simulate the calculations we are doing with each iteration. Thus, you would have a vectorized solution with the two innermost loops being removed for better performance, like so -
import numpy as np
from scipy import signal
# Get the scaling factors and negate them to get kernel
kernel = -np.array([[0,1-2*t,0],[-1,1,0,],[t,0,0]])
# Initialize output array and run 2D convolution and set values into it
out = np.zeros((height,width))
out[1:-1,1:-1] = signal.convolve2d(gr, kernel, mode='same')[1:-1,:-2]
Verify output and runtime tests
Define functions:
def org_app(gr,t):
    new_gr = np.zeros((height,width))
    for h in xrange(1, height-1):
        for w in xrange(1, width-1):
            new_gr[h][w] = gr[h][w] + gr[h][w-1] + gr[h-1][w] + t * gr[h+1][w-1] - 2 * (gr[h][w-1] + t * gr[h-1][w])
    return new_gr

def proposed_app(gr,t):
    kernel = -np.array([[0,1-2*t,0],[-1,1,0,],[t,0,0]])
    out = np.zeros((height,width))
    out[1:-1,1:-1] = signal.convolve2d(gr, kernel, mode='same')[1:-1,:-2]
    return out
Verify -
In [244]: # Inputs
...: gr = np.random.rand(40,50)
...: height,width = gr.shape
...: t = 1
...:
In [245]: np.allclose(org_app(gr,t),proposed_app(gr,t))
Out[245]: True
Timings -
In [246]: # Inputs
...: gr = np.random.rand(400,500)
...: height,width = gr.shape
...: t = 1
...:
In [247]: %timeit org_app(gr,t)
1 loops, best of 3: 2.13 s per loop
In [248]: %timeit proposed_app(gr,t)
10 loops, best of 3: 19.4 ms per loop
@Divakar, I tried a couple of variations on your org_app. The fully vectorized version is:
def org_app4(gr,t):
    new_gr = np.zeros((height,width))
    I = np.arange(1,height-1)[:,None]
    J = np.arange(1,width-1)
    new_gr[I,J] = gr[I,J] + gr[I,J-1] + gr[I-1,J] + t * gr[I+1,J-1] - 2 * (gr[I,J-1] + t * gr[I-1,J])
    return new_gr
While it is half the speed of your proposed_app, it is closer in style to the original, and thus may help with understanding how nested loops can be vectorized.
An important step is the conversion of I into a column array, so that together I,J index a block of values.
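A tiny toy example (not from the original answer) of how the column array broadcasts against the row array to pick out a block:

import numpy as np

a = np.arange(25).reshape(5, 5)
I = np.arange(1, 4)[:, None]   # shape (3, 1), column of row indices
J = np.arange(1, 4)            # shape (3,),  row of column indices
print(a[I, J])                 # broadcasts to the 3x3 block a[1:4, 1:4]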
It takes 0.02 seconds for Matlab to compute the inverse of a diagonal matrix using the sparse command.
P = diag(1:10000);
P = sparse(P);
tic;
A = inv(P);
toc
However, for the Python code it takes forever - several minutes.
import numpy as np
import time
startTime = time.time()
P = np.diag(range(1,10000))
A = np.linalg.inv(P)
runningTime = (time.time()-startTime)/60
print "The script was running for %f minutes" % runningTime
I tried to use the scipy.sparse module, but it did not help. The running time dropped, but only to 40 seconds.
import numpy as np
import time
import scipy.sparse as sps
import scipy.sparse.linalg as spsl
startTime = time.time()
P = np.diag(range(1,10000))
P_sps = sps.coo_matrix(P)
A = spsl.inv(P_sps)
runningTime = (time.time()-startTime)/60
print "The script was running for %f minutes" % runningTime
Is it possible to run the code as fast as it runs in Matlab?
Here is the answer. When you run inv in Matlab for a sparse matrix, Matlab checks different properties of the matrix to optimize the calculation. For a sparse diagonal matrix, you can run the following code to see what Matlab is doing:
n = 10000;
a = diag(1:n);
a = sparse(a);
I = speye(n,n);
spparms('spumoni',1);
ainv = inv(a);
spparms('spumoni',0);
Matlab will print the following:
sp\: bandwidth = 0+1+0.
sp\: is A diagonal? yes.
sp\: do a diagonal solve.
So Matlab is inverting only the diagonal.
How does SciPy invert the matrix?
Here we have the code:
...
from scipy.sparse.linalg import spsolve
...

def inv(A):
    """
    Some comments...
    """
    I = speye(A.shape[0], A.shape[1], dtype=A.dtype, format=A.format)
    Ainv = spsolve(A, I)
    return Ainv
and in spsolve:
# Cover the case where b is also a matrix
Afactsolve = factorized(A)
tempj = empty(M, dtype=int)
x = A.__class__(b.shape)
for j in range(b.shape[1]):
    xj = Afactsolve(squeeze(b[:, j].toarray()))
    w = where(xj != 0.0)[0]
    tempj.fill(j)
    x = x + A.__class__((xj[w], (w, tempj[:len(w)])),
                        shape=b.shape, dtype=A.dtype)
i.e., SciPy factorizes A and then solves a set of linear systems whose right-hand sides are the coordinate vectors (forming the identity matrix). Collecting all the solutions into a matrix gives the inverse of the initial matrix.
Since Matlab takes advantage of the diagonal structure of the matrix while SciPy does not (of course SciPy also uses the structure of the matrix, but in a less efficient way, at least for this example), Matlab should be much faster.
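For comparison, a minimal sketch of applying the same diagonal shortcut by hand on the SciPy side (this is not what scipy.sparse.linalg.inv does; it just reproduces Matlab's special case for a diagonal matrix):

import numpy as np
import scipy.sparse as sps

# For a diagonal matrix, the inverse is simply the reciprocal of the
# diagonal entries, so it can be built directly with no factorization.
d = np.arange(1, 10001, dtype=np.float64)
P = sps.diags(d)            # sparse diagonal matrix diag(1..10000)
P_inv = sps.diags(1.0 / d)  # its inverse, built element-wise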
EDIT
To be sure, as @P.Escondido proposed, we will try a minor modification of the matrix A, to trace the Matlab procedure when the matrix is not diagonal:
n = 10000; a = diag(1:n); a = sparse(a); ainv = sparse(n,n);
spparms('spumoni',1);
a(100,10) = 500; a(10,1000) = 200;
ainv = inv(a);
spparms('spumoni',0);
It prints out the following:
sp\: bandwidth = 90+1+990.
sp\: is A diagonal? no.
sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
sp\: is A triangular? no.
sp\: is A morally triangular? yes.
sp\: permute and solve.
sp\: sprealloc in sptsolve: 10000 10000 10000 15001
How about splu()? It's faster, but it needs a dense array (for the right-hand side) and returns a dense array:
Create a random matrix:
import numpy as np
import time
import scipy.sparse as sps
import scipy.sparse.linalg as spsl
from numpy.random import randint
N = 1000
i = np.arange(N)
j = np.arange(N)
v = np.ones(N)
i2 = randint(0, N, N)
j2 = randint(0, N, N)
v2 = np.random.rand(N)
i = np.concatenate((i, i2))
j = np.concatenate((j, j2))
v = np.concatenate((v, v2))
A = sps.coo_matrix((v, (i, j)))
A = A.tocsc()
%time B = spsl.inv(A)
Calculate the inverse matrix with splu():
%%time
lu = spsl.splu(A)
eye = np.eye(N)
B2 = lu.solve(eye)
check the result:
np.allclose(B.todense(), B2.T)
Here is the %time output:
inv: 2.39 s
splu: 193 ms
You are withholding crucial information from your software: the fact that the matrix is diagonal makes it super easy to invert: you simply invert each element of its diagonal:
P = np.diag(range(1,10000))
A = np.diag(1.0/np.arange(1,10000))
Of course, this is only valid for diagonal matrices...
If you try it like this, the result will be better:
import numpy as np
import time
import scipy.sparse as sps
import scipy.sparse.linalg as spsl
P = np.diag(range(1,10000))
P_sps = sps.coo_matrix(P)
startTime = time.time()
A = spsl.inv(P_sps)
runningTime = (time.time()-startTime)/60
print "The script was running for %f minutes" % runningTime
Now you can compare with your matlab script.