Dask Distributed: Reducing Multiple Dimensions into a Distance Matrix - python

I want to calculate a large distance matrix based on a higher-dimensional vector. For instance, I have 1000 instances, each represented by 20 vectors of length 10. The distance between two instances is given by the mean distance between the 20 vectors associated with each instance. So I want to go from a 1000 by 20 by 10 array to a 1000 by 1000 (lower-triangular) matrix. Because these calculations can get slow, I want to use Dask distributed to block the algorithm and spread it over several CPUs. Below is how far I've gotten:
Preamble
import itertools
import random
import numpy as np
import dask.array
from dask.distributed import Client
The distance function is defined by
def distance(u, v):
    result = np.empty([int((len(u)*(len(u)+1))/2)], dtype=float)
    for i, j in itertools.product(range(len(u)), range(len(v))):
        if j <= i:
            differences = []
            k = int(((i*(i+1))/2 + j - 1) + 1)
            for x, y in itertools.product(u[i], v[j]):
                difference = np.abs(np.array(x) - np.array(y)).sum()
                differences.append(difference)
            result[k] = np.mean(differences)
    return result
and returns an array of length n*(n+1)/2 to describe the lower triangular matrix for this block of the distance matrix.
def distance_matrix(X):
    X = np.asarray(X, dtype=object)
    X = dask.array.from_array(X, (100, 20, 10)).astype(float)
    print("chunksize: ", X.chunksize)
    resulting_length = [int((X.chunksize[0]*(X.chunksize[0])+1)/2)]
    result = dask.array.map_blocks(distance, X, X, chunks=(resulting_length), drop_axis=[1, 2], dtype=float)
    return result.compute()
I split the input array into chunks and use dask.array.map_blocks to apply the distance calculation to all the blocks.
if __name__ == '__main__':
    workers = 6
    X = np.array([[[random.random() for _ in range(10)] for _ in range(20)] for _ in range(1000)])
    client = Client(n_workers=workers)
    results = distance_matrix(X)
    client.close()
    print(results)
Unfortunately, this approach returns an array of the wrong length at the end of the process. Would somebody help me out here? I don't have much experience with distributed computing.

I'm a big fan of dask, but this problem is way too small to need it. The runtime issue you're seeing is because you are looping through each element in python rather than using vectorized operations in numpy.
As with many packages in Python, numpy relies on highly efficient compiled code written in other, faster languages such as C to carry out array operations. When you do an array operation like A + B, numpy calls these fast routines once, and the work is carried out within a highly optimized C routine. There is overhead involved in making calls to another language, but it is overwhelmed by the performance gain of a single call to a very fast routine. If you instead loop over every element, adding cell-wise, you have a (slow) Python loop that calls into the C code once per element, paying that overhead for every element of the array. Because of this, you would actually be better off not using numpy at all if you're going to work one element at a time.
To implement this in a vectorized manner, you can exploit numpy's broadcasting rules to ensure the first dimensions of your two arrays expand to a new dimension. I don't totally understand what's going on in your distance function, but you could extend this simple version to do whatever you want:
In [1]: import numpy as np
In [2]: A = np.random.random((1000, 20))
...: B = np.random.random((1000, 20))
In [3]: distance = np.abs(A[:, np.newaxis, :] - B[np.newaxis, :, :]).sum(axis=-1)
In [4]: distance
Out[4]:
array([[7.22985776, 7.76185666, 5.61824886, ..., 7.62092039, 6.35189562,
        7.06365986],
       [5.73359499, 5.8422105 , 7.2644021 , ..., 5.72230353, 6.79390303,
        5.03074007],
       [7.27871151, 8.6856818 , 5.97489449, ..., 8.86620029, 7.49875638,
        6.57389575],
       ...,
       [7.67783107, 7.24419076, 4.17941596, ..., 8.68674754, 6.65078093,
        5.67279811],
       [7.1550136 , 6.10590227, 5.75417987, ..., 7.05953998, 5.8306628 ,
        6.55112672],
       [5.81748615, 6.79246838, 6.95053088, ..., 7.63994705, 6.77720511,
        7.5663236 ]])
In [5]: distance.shape
Out[5]: (1000, 1000)
The performance difference can be seen clearly against a looped implementation:
In [6]: %%timeit
...: np.abs(A[:, np.newaxis, :] - B[np.newaxis, :, :]).sum(axis=-1)
...:
...:
45 ms ± 326 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %%timeit
...: distances = np.empty((1000, 1000))
...: for i in range(1000):
...: for j in range(1000):
...: distances[i, j] = np.abs(A[i, :] - B[j, :]).sum()
...:
2.42 s ± 7.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The looped version takes more than 50x as long!
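To connect this back to the original three-dimensional problem, here is a hedged sketch (my extension of the answer, not verified against your exact distance definition) of the same broadcasting idea for an (N, 20, 10) array. Note that the intermediate array has shape (N, N, 20, 20, 10), so for N = 1000 you would want to apply it block-wise, which is exactly where dask's map_blocks could genuinely help:
X = np.random.random((100, 20, 10))  # smaller N: the intermediate below is already ~320 MB at N=100
diff = np.abs(X[:, None, :, None, :] - X[None, :, None, :, :])  # shape (N, N, 20, 20, 10)
dist = diff.sum(axis=-1).mean(axis=(-2, -1))  # mean over the 20x20 vector pairs; shape (N, N)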

Related

Slower time series simulation with Numba

I would like to use the @njit decorator from Numba on this code which, given matrices A, B, C, D, produces a sample from the state-space model
x_n = A@x_{n-1} + B@v_n
y_n = C@x_n + D@v_n
import numpy as np
from numba import njit

@njit
def generate_Y_state_space(N, A, B, C, D):
    """
    Simulate M-dimensional time series given state space model defined by A, B, C, D.
    """
    M = A.shape[0]
    v = np.random.normal(0, 1/np.sqrt(2), (M, N)) + 1j*np.random.normal(0, 1/np.sqrt(2), (M, N))  # complex Gaussian random variable
    x = np.zeros((M, N), dtype='c16')  # 'c16' is the numba type for complex128
    y = np.zeros((M, N), dtype='c16')
    # initialization
    x[:, 0] = v[:, 0]
    y[:, 0] = C@x[:, 0] + D@v[:, 0]
    for i in range(1, N):
        x[:, i] = A@x[:, i-1] + B@v[:, i]
        y[:, i] = C@x[:, i] + D@v[:, i]
    return y
However, without the njit decorator, I get the following performance (N=1000, M=100)
%timeit generate_Y_state_space(N, A, B, C, D)
27.9 ms ± 728 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
while with the njit decorator, the performance has not really improved:
%timeit generate_Y_state_space(N, A, B, C, D)
24.1 ms ± 6.21 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I wonder if the Numba implementation of matrix multiplication is in fact no better than the Numpy one... Do you have any idea of how I could improve this code?
Edit: I think that Numba could provide a nice performance improvement not on the matrix multiplication (as Numpy is already pretty fast, as pointed out), but rather on the for loop (which is necessary here, since the whole point of a time series is to generate each new data point as a transformation of the previous one).
One possible reason why you get a slight decrease in performance with Numba is that you need at least to use fastmath=True in the @njit decorator to be as fast as Numpy, which uses it internally.
Another reason is that the @njit decorator compiles the function at runtime, which is a bit slow (and often takes more than 28 ms). You should be careful not to include this compilation time in the benchmark. You can specify the types in the decorator so that Numba can compile the function before the first call (ahead of time). Here is an example:
@njit('c16[:,::1](int64, c16[:,::1], c16[:,::1], c16[:,::1], c16[:,::1])')
Moreover, you do not need to zero-initialize the arrays: x and y can be left uninitialized.
Finally, you can speed up the computation using parallelism. This is not straightforward here, as there is a temporal dependency on x[:,i]. However, B@v[:,i] and D@v[:,i] can be computed in parallel, for example. Thus, you can use the parameter parallel=True and prange rather than range.
Here is an (untested) example:
import numpy as np
from numba import njit, prange

@njit('c16[:,::1](int64, c16[:,::1], c16[:,::1], c16[:,::1], c16[:,::1])', fastmath=True, parallel=True)
def generate_Y_state_space(N, A, B, C, D):
    """
    Simulate M-dimensional time series given state space model defined by A, B, C, D.
    """
    M = A.shape[0]
    v = np.random.normal(0, 1/np.sqrt(2), (M, N)) + 1j*np.random.normal(0, 1/np.sqrt(2), (M, N))  # complex Gaussian random variable
    x = np.empty((M, N), dtype='c16')  # 'c16' is the numba type for complex128
    y = np.empty((M, N), dtype='c16')
    # initialization
    x[:, 0] = v[:, 0]
    y[:, 0] = C@x[:, 0] + D@v[:, 0]
    for i in prange(1, N):
        x[:, i] = B@v[:, i]
        y[:, i] = D@v[:, i]
    for i in range(1, N):
        x[:, i] += A@x[:, i-1]
        y[:, i] += C@x[:, i]
    return y
Parallelism will not necessarily make the code faster, but it should be worth a try on a desktop machine.
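As a hedged illustration (my addition, not part of the original answer) of the point about compilation time: when the types are not given in the decorator, compilation happens on the first call, so a benchmark should trigger it once before timing (harmless but unnecessary if the typed signature above is used, since that compiles at decoration time):
import time
import numpy as np

N, M = 1000, 100
rng = np.random.default_rng(0)
# hypothetical test matrices; complex128 and C-contiguous to match the signature above
A, B, C, D = (rng.normal(size=(M, M)).astype('c16') for _ in range(4))

generate_Y_state_space(N, A, B, C, D)   # first call: includes compilation
t1 = time.perf_counter()
generate_Y_state_space(N, A, B, C, D)   # second call: measures runtime only
print(time.perf_counter() - t1)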

Computing distances from points to a line in vector form efficiently

Computing the distance of a point to a line in vector form is trivial.
However, I implemented it the following way, which is extremely slow:
def compute_point_distance_to_line(point):
    dist = np.linalg.norm((a - point) - np.vdot((a - point), n) * n)
    return dist

np.apply_along_axis(compute_point_distance_to_line, 1, xyz)
I used the notation from Wikipedia; the shape of xyz is (2521909, 3), and the shapes of a, n, and point are consequently (3,).
I tried it the following way:
def compute_point_distance_to_line2(points):
    _a = np.tile(a, (points.shape[0], 1))
    _n = np.tile(n, (points.shape[0], 1))
    _n_t = np.ascontiguousarray(np.swapaxes(_n, 0, 1))
    diffs = _a - points
    vdots_scaled = np.dot(diffs, _n_t) * n
    diffs = diffs - vdots_scaled
    return np.linalg.norm(diffs, axis=1)
Unfortunately, for me this results in a Memory Error when computing the dot product.
Are there any faster ways?
You can vectorize this with something like:
temp = np.subtract(a, xyz) # so we only have to compute this once
dist = np.linalg.norm(np.subtract(temp, np.multiply(np.dot(temp, n)[:, None], n)),
axis=-1)
# 220 ms ± 6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Compared with timing for your first code example above:
# 30 s ± 1.89 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
It's giving me the same result as your first code example (checked with np.array_equal()), and it is a couple of orders of magnitude faster.
Explanation
The trick is getting the np.multiply() call to work correctly by adding an extra axis to the result of np.dot(), which I do with the [:, None] slice after np.dot(). Basically None used in numpy slices is a shortcut for adding an axis, so the result of np.dot() for you should have shape (2521909,), and after the brackets with None, it will have shape (2521909, 1). The result of np.multiply() (and temp) will then have shape (2521909, 3), and we take the norm along the last axis to get the 3-dimensional distance from the line to each of your 2521909 points.
In general, try not to use operations like tile when you can use broadcasting instead, especially when speed/memory is an issue.
https://docs.scipy.org/doc/numpy-1.15.0/user/basics.broadcasting.html
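As a hedged mini-example of that advice (my addition): broadcasting gives the same answer as tile without materializing the repeated copies:
import numpy as np

a = np.array([1.0, 2.0, 3.0])
points = np.random.rand(5, 3)

tiled = np.tile(a, (points.shape[0], 1)) - points  # materializes a (5, 3) copy of a first
broadcast = a - points                             # no copy; a is broadcast across the rows
print(np.array_equal(tiled, broadcast))            # True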
With a little algebra you can write this as a single matrix-vector product, followed by the norm. This may help you avoid temporary variables and save memory.
Here is a working example. Note that in this example all the 3D vectors are column vectors so p is 3x1000 instead of 1000x3. You will have to transpose your p to plug it into this example.
import numpy as np
# define an example line with unit n
a = np.array([1,2,3])
n = np.array([4,5,6])
norm2n = np.sum(n**2)
n = n/np.sqrt(norm2n)
# get some point data p
p = np.random.randn(3,1000)
# form the projection matrix (see use of None in broadcasting at link above)
P = np.eye(3) - n[:,None]*n[None,:]
# perform the projection using matrix multiplication
projected = P.dot(a[:,None]-p)
# get the distance
dist = np.sqrt(np.sum(projected**2, axis=0))
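As a hedged sanity check (my addition, reusing a, n, p, and dist from the example above), the projection-matrix result agrees with the broadcasting version from the previous answer up to floating-point error:
temp = a - p.T                                       # transpose p back to (1000, 3) row layout
dist_broadcast = np.linalg.norm(temp - np.dot(temp, n)[:, None] * n, axis=-1)
print(np.allclose(dist, dist_broadcast))             # True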

np.shuffle much slower than np.random.choice

I have an array of shape (N, 3) and I'd like to randomly shuffle the rows. N is on the order of 100,000.
I discovered that np.random.shuffle was bottlenecking my application. I tried replacing the shuffle with a call to np.random.choice and experienced a 10x speed-up. What's going on here? Why is it so much faster to call np.random.choice? Does the np.random.choice version generate a uniformly distributed shuffle?
import timeit
task_choice = '''
N = 100000
x = np.zeros((N, 3))
inds = np.random.choice(N, N, replace=False)
x[np.arange(N), :] = x[inds, :]
'''
task_shuffle = '''
N = 100000
x = np.zeros((N, 3))
np.random.shuffle(x)
'''
task_permute = '''
N = 100000
x = np.zeros((N, 3))
x = np.random.permutation(x)
'''
setup = 'import numpy as np'
timeit.timeit(task_choice, setup=setup, number=10)
>>> 0.11108078400138766
timeit.timeit(task_shuffle, setup=setup, number=10)
>>> 1.0411593900062144
timeit.timeit(task_permute, setup=setup, number=10)
>>> 1.1140159380011028
Edit: For anyone curious, I decided to go with the following solution since it is readable and outperformed all other methods in my benchmarks:
task_ind_permute = '''
N = 100000
x = np.zeros((N, 3))
inds = np.random.permutation(N)
x[np.arange(N), :] = x[inds, :]
'''
You're comparing very differently sized arrays here. In your first example, although you create an array of zeros, you simply use random.choice(100000, 100000), which pulls 100000 random index values between 0 and 99999. In your second example you are shuffling a (100000, 3) shape array.
>>> x.shape
(100000, 3)
>>> np.random.choice(N, N, replace=False).shape
(100000,)
Timings on more equivalent samples:
In [979]: %timeit np.random.choice(N, N, replace=False)
2.6 ms ± 201 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [980]: x = np.arange(100000)
In [981]: %timeit np.random.shuffle(x)
2.29 ms ± 67.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [982]: x.shape == np.random.choice(N, N, replace=False).shape
Out[982]: True
permutation and shuffle are linked; in fact, permutation calls shuffle under the hood!
The reason why shuffle is slower than permutation for multidimensional arrays is that permutation only needs to shuffle the indices along the first axis. It thus becomes a special case of shuffling a 1-d array (the first if-else block).
This special case is also explained in the source as well:
# We trick gcc into providing a specialized implementation for
# the most common case, yielding a ~33% performance improvement.
# Note that apparently, only one branch can ever be specialized.
For shuffle, on the other hand, the multidimensional ndarray operation requires a bounce buffer, and creating that buffer, especially when the dimensions are relatively big, becomes expensive. Additionally, we can no longer use the trick mentioned above that helps the 1-d case.
With replace=False and using choice to generate a new array of the same size, choice and permutation are the same, see here. The extra time would have to come from the time spent creating intermediate index arrays.
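A hedged illustration (my addition) of why the index-based approaches win for 2-d arrays: a permutation of the row indices followed by fancy indexing is a single vectorized gather of whole rows, with no per-row bounce buffer:
import numpy as np

N = 100_000
x = np.random.rand(N, 3)

shuffled = x[np.random.permutation(N)]  # one vectorized gather of whole rows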

When to use .shape and when to use .reshape?

I ran into a memory problem when trying to use .reshape on a numpy array, and figured that if I could somehow reshape the array in place, that would be great.
I realised that I could reshape arrays by simply changing the .shape value.
Unfortunately, when I tried using .shape I again got a memory error, which has me thinking that it doesn't reshape in place.
I was wondering: when do I use one, and when do I use the other?
Any help is appreciated.
If you want additional information please let me know.
EDIT:
I added my code and how the matrix I want to reshape is created in case that is important.
Change the N value depending on your memory.
import numpy as np
N = 100
a = np.random.rand(N, N)
b = np.random.rand(N, N)
c = a[:, np.newaxis, :, np.newaxis] * b[np.newaxis, :, np.newaxis, :]
c = c.reshape([N*N, N*N])
c.shape = ([N, N, N, N])
EDIT2:
This is a better representation. Apparently the transpose is important, as it changes the arrays from C-contiguous to F-contiguous; the resulting multiplication in the case above is contiguous, while in the one below it is not.
import numpy as np
N = 100
a = np.random.rand(N, N).T
b = np.random.rand(N, N).T
c = a[:, np.newaxis, :, np.newaxis] * b[np.newaxis, :, np.newaxis, :]
c = c.reshape([N*N, N*N])
c.shape = ([N, N, N, N])
numpy.reshape will copy the data if it can't make a proper view, whereas setting the shape will raise an error instead of copying the data.
It is not always possible to change the shape of an array without copying the data. If you want an error to be raised when the data would be copied, you should assign the new shape to the shape attribute of the array.
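A short hedged demonstration of that difference (my addition), using a transposed view that cannot be reshaped without a copy:
import numpy as np

t = np.arange(12).reshape(3, 4).T  # non-contiguous view

b = t.reshape(12)                  # works: numpy silently copies the data
try:
    t.shape = (12,)                # raises: cannot be done without copying
except AttributeError as e:
    print(e)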
I would like to revisit this question focusing on the OOP paradigm, despite the memory issues presented as the problem.
When to use .shape and when to use .reshape?
OOP principle of Encapsulation
Following OOP paradigms, since shape is a property of the numpy.array object, it is always advisable to call an object method to change properties. This adheres to the OOP principle of encapsulation.
Performance Issues
As for performance, there seems to be no difference.
import numpy as np
# creates an array of 1,000,000 random floats
a = np.array(np.random.rand(1_000_000))
a.shape
# (1000000,)
# using IPython to time both operations:
%timeit a.shape = (5_000, 200)
# 201 ns ± 4.85 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
%timeit a.reshape(5_000, 200)
# 217 ns ± 0.957 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Running hardware
OS : Linux 4.15.0-142-generic #146~16.04.1-Ubuntu
CPU: Intel(R) Core(TM) i3-4170 CPU @ 3.70GHz, 4 cores
RAM: 16GB

Is there a way to efficiently invert an array of matrices with numpy?

Normally I would invert an array of 3x3 matrices in a for loop like in the example below. Unfortunately for loops are slow. Is there a faster, more efficient way to do this?
import numpy as np
A = np.random.rand(3,3,100)
Ainv = np.zeros_like(A)
for i in range(100):
    Ainv[:,:,i] = np.linalg.inv(A[:,:,i])
It turns out that you're getting burned two levels down in the numpy.linalg code. If you look at numpy.linalg.inv, you can see it's just a call to numpy.linalg.solve(A, eye(A.shape[0])). This has the effect of recreating the identity matrix in each iteration of your for loop. Since all your arrays are the same size, that's a waste of time. Skipping this step by pre-allocating the identity matrix shaves ~20% off the time (fast_inverse). My testing suggests that pre-allocating the output array or building it from a list of results doesn't make much difference.
Look one level deeper and you find the call to the lapack routine, but it's wrapped in several sanity checks. If you strip all these out and just call lapack in your for loop (since you already know the dimensions of your matrix and maybe know that it's real, not complex), things run MUCH faster (Note that I've made my array larger):
import numpy as np
A = np.random.rand(1000,3,3)

def slow_inverse(A):
    Ainv = np.zeros_like(A)
    for i in range(A.shape[0]):
        Ainv[i] = np.linalg.inv(A[i])
    return Ainv

def fast_inverse(A):
    identity = np.identity(A.shape[2], dtype=A.dtype)
    Ainv = np.zeros_like(A)
    for i in range(A.shape[0]):
        Ainv[i] = np.linalg.solve(A[i], identity)
    return Ainv

def fast_inverse2(A):
    identity = np.identity(A.shape[2], dtype=A.dtype)
    return np.array([np.linalg.solve(x, identity) for x in A])

from numpy.linalg import lapack_lite
lapack_routine = lapack_lite.dgesv
# Looking one step deeper, we see that solve performs many sanity checks.
# Stripping these, we have:
def faster_inverse(A):
    b = np.identity(A.shape[2], dtype=A.dtype)
    n_eq = A.shape[1]
    n_rhs = A.shape[2]
    pivots = np.zeros(n_eq, np.intc)
    identity = np.eye(n_eq)
    def lapack_inverse(a):
        b = np.copy(identity)
        pivots = np.zeros(n_eq, np.intc)
        results = lapack_lite.dgesv(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
        if results['info'] > 0:
            raise np.linalg.LinAlgError('Singular matrix')
        return b
    return np.array([lapack_inverse(a) for a in A])

%timeit -n 20 aI11 = slow_inverse(A)
%timeit -n 20 aI12 = fast_inverse(A)
%timeit -n 20 aI13 = fast_inverse2(A)
%timeit -n 20 aI14 = faster_inverse(A)
The results are impressive:
20 loops, best of 3: 45.1 ms per loop
20 loops, best of 3: 38.1 ms per loop
20 loops, best of 3: 38.9 ms per loop
20 loops, best of 3: 13.8 ms per loop
EDIT: I didn't look closely enough at what gets returned in solve. It turns out that the 'b' matrix is overwritten and contains the result in the end. This code now gives consistent results.
A few things have changed since this question was asked and answered: numpy.linalg.inv now supports multidimensional arrays, handling them as stacks of matrices with the matrix indices last (in other words, arrays of shape (..., N, N)). This seems to have been introduced in numpy 1.8.0. Unsurprisingly, this is by far the best option in terms of performance:
import numpy as np
A = np.random.rand(3,3,1000)

def slow_inverse(A):
    """Looping solution for comparison"""
    Ainv = np.zeros_like(A)
    for i in range(A.shape[-1]):
        Ainv[...,i] = np.linalg.inv(A[...,i])
    return Ainv

def direct_inverse(A):
    """Compute the inverse of matrices in an array of shape (N,N,M)"""
    return np.linalg.inv(A.transpose(2,0,1)).transpose(1,2,0)
Note the two transposes in the latter function: the input of shape (N,N,M) has to be transposed to shape (M,N,N) for np.linalg.inv to work, then the result has to be permuted back to shape (N,N,M).
A check and timing results using IPython, on python 3.6 and numpy 1.14.0:
In [5]: np.allclose(slow_inverse(A),direct_inverse(A))
Out[5]: True
In [6]: %timeit slow_inverse(A)
19 ms ± 138 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit direct_inverse(A)
1.3 ms ± 6.39 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
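A hedged aside (my addition): if you control the array layout from the start, storing the matrices with the stack axis first avoids both transposes entirely:
import numpy as np

A_stacked = np.random.rand(1000, 3, 3)   # (M, N, N) layout from the start
Ainv = np.linalg.inv(A_stacked)          # vectorized directly over the leading axis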
Numpy BLAS calls are not always the fastest possibility. On problems where you have to calculate lots of inverses, eigenvalues, or dot products of small 3x3 matrices or similar cases, the numpy-MKL build which I use can often be outperformed by quite a margin.
These external BLAS routines are usually made for problems with larger matrices; for smaller ones you can write out a standard algorithm or take a look at e.g. Intel IPP.
Please keep also in mind that Numpy uses C-ordered arrays by default (last dimension changes fastest).
For this example I took the code from Matrix inversion (3,3) python - hard coded vs numpy.linalg.inv and modified it a bit.
import numpy as np
import numba as nb
import time

@nb.njit(fastmath=True)
def inversion(m):
    minv = np.empty(m.shape, dtype=m.dtype)
    for i in range(m.shape[0]):
        determinant_inv = 1./(m[i,0]*m[i,4]*m[i,8] + m[i,3]*m[i,7]*m[i,2] + m[i,6]*m[i,1]*m[i,5] - m[i,0]*m[i,5]*m[i,7] - m[i,2]*m[i,4]*m[i,6] - m[i,1]*m[i,3]*m[i,8])
        minv[i,0] = (m[i,4]*m[i,8]-m[i,5]*m[i,7])*determinant_inv
        minv[i,1] = (m[i,2]*m[i,7]-m[i,1]*m[i,8])*determinant_inv
        minv[i,2] = (m[i,1]*m[i,5]-m[i,2]*m[i,4])*determinant_inv
        minv[i,3] = (m[i,5]*m[i,6]-m[i,3]*m[i,8])*determinant_inv
        minv[i,4] = (m[i,0]*m[i,8]-m[i,2]*m[i,6])*determinant_inv
        minv[i,5] = (m[i,2]*m[i,3]-m[i,0]*m[i,5])*determinant_inv
        minv[i,6] = (m[i,3]*m[i,7]-m[i,4]*m[i,6])*determinant_inv
        minv[i,7] = (m[i,1]*m[i,6]-m[i,0]*m[i,7])*determinant_inv
        minv[i,8] = (m[i,0]*m[i,4]-m[i,1]*m[i,3])*determinant_inv
    return minv

# I was too lazy to modify the code from the link above more thoroughly
def inversion_3x3(m):
    m_TMP = m.reshape(m.shape[0], 9)
    minv = inversion(m_TMP)
    return minv.reshape(minv.shape[0], 3, 3)

# Testing
A = np.random.rand(1000000, 3, 3)

# Warmup so as not to measure compilation overhead on the first call.
# You may also use @nb.njit(fastmath=True, cache=True), but that still has about
# 0.2 s overhead on the first call.
Ainv = inversion_3x3(A)

t1 = time.time()
Ainv = inversion_3x3(A)
print(time.time()-t1)

t1 = time.time()
Ainv2 = np.linalg.inv(A)
print(time.time()-t1)

print(np.allclose(Ainv2, Ainv))
Performance
np.linalg.inv: 0.36 s
inversion_3x3: 0.031 s
For loops are indeed not necessarily much slower than the alternatives, and in this case avoiding them will not help you much either. But here is a suggestion:
import numpy as np
A = np.random.rand(100,3,3)  # this makes it possible to index the matrices as A[i]
Ainv = np.array(list(map(np.linalg.inv, A)))
Timing this solution vs. your solution yields a small but noticeable difference:
# The for loop:
100 loops, best of 3: 6.38 ms per loop
# The map:
100 loops, best of 3: 5.81 ms per loop
I tried to use the numpy routine 'vectorize' with the hope of creating an even cleaner solution, but I'll have to take a second look into that. The change of ordering in the array A is probably the most significant change, since it utilises the fact that numpy arrays are C-ordered by default, and therefore a linear readout of each 3x3 matrix's data is ever so slightly faster this way.
