Calculate the triangular matrix of distances between coordinates in a NumPy array - python

I have a NumPy array of coordinates. For example purposes, I will use this:
In [1]: np.random.seed(123)
In [2]: coor = np.random.randint(10, size=12).reshape(-1,3)
In [3]: coor
Out[3]:
array([[2, 2, 6],
       [1, 3, 9],
       [6, 1, 0],
       [1, 9, 0]])
I want the triangular matrix of distances between all coordinates. A simple approach would be to code a double loop over all coordinates
In [4]: n_coor = len(coor)
In [5]: dist = np.zeros((n_coor, n_coor))
In [6]: for j in range(n_coor):
   ...:     for k in range(j+1, n_coor):
   ...:         dist[j, k] = np.sqrt(np.sum((coor[j] - coor[k]) ** 2))
with the result being an upper triangular matrix of the distances
In [7]: dist
Out[7]:
array([[ 0.        ,  3.31662479,  7.28010989,  9.2736185 ],
       [ 0.        ,  0.        , 10.48808848, 10.81665383],
       [ 0.        ,  0.        ,  0.        ,  9.43398113],
       [ 0.        ,  0.        ,  0.        ,  0.        ]])
Leveraging NumPy, I can avoid looping using
In [8]: dist = np.sqrt(((coor[:, None, :] - coor) ** 2).sum(-1))
but the result is the entire matrix
In [9]: dist
Out[9]:
array([[ 0.        ,  3.31662479,  7.28010989,  9.2736185 ],
       [ 3.31662479,  0.        , 10.48808848, 10.81665383],
       [ 7.28010989, 10.48808848,  0.        ,  9.43398113],
       [ 9.2736185 , 10.81665383,  9.43398113,  0.        ]])
This one-line version takes roughly half the time when I use 2048 coordinates (4 s instead of 10 s), but it does twice as many calculations as it needs in order to get the symmetric matrix. Is there a way to adjust the one-line command to get only the triangular matrix (and the additional 2x speedup, i.e. 2 s)?

We can use SciPy's pdist to get those distances. So we just need to initialize the output array and then set the upper-triangular values with those distances:
from scipy.spatial.distance import pdist
n_coor = len(coor)
dist = np.zeros((n_coor, n_coor))
row,col = np.triu_indices(n_coor,1)
dist[row,col] = pdist(coor)
Alternatively, we can use boolean indexing to assign the values, replacing the last two lines:
dist[np.arange(n_coor)[:,None] < np.arange(n_coor)] = pdist(coor)
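As a quick sanity check (a sketch that reuses the 4-point coor array from the question), either assignment reproduces the double-loop result:
import numpy as np
from scipy.spatial.distance import pdist

np.random.seed(123)
coor = np.random.randint(10, size=12).reshape(-1, 3)

n_coor = len(coor)
dist = np.zeros((n_coor, n_coor))
dist[np.triu_indices(n_coor, 1)] = pdist(coor)   # pdist uses the same row-major upper-triangular order

# reference: the double loop from the question
ref = np.zeros((n_coor, n_coor))
for j in range(n_coor):
    for k in range(j + 1, n_coor):
        ref[j, k] = np.sqrt(np.sum((coor[j] - coor[k]) ** 2))

print(np.allclose(dist, ref))   # True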
Runtime test
Functions:
def subscripted_indexing(coor):
    n_coor = len(coor)
    dist = np.zeros((n_coor, n_coor))
    row, col = np.triu_indices(n_coor, 1)
    dist[row, col] = pdist(coor)
    return dist

def boolean_indexing(coor):
    n_coor = len(coor)
    dist = np.zeros((n_coor, n_coor))
    r = np.arange(n_coor)
    dist[r[:, None] < r] = pdist(coor)
    return dist
Timings:
In [110]: # Setup input array
...: coor = np.random.randint(0,10, (2048,3))
In [111]: %timeit subscripted_indexing(coor)
10 loops, best of 3: 91.4 ms per loop
In [112]: %timeit boolean_indexing(coor)
10 loops, best of 3: 47.8 ms per loop
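As an aside (not part of the answer above): if the full symmetric matrix is acceptable, SciPy's squareform builds it directly from the condensed pdist output, so each distance is still computed only once:
import numpy as np
from scipy.spatial.distance import pdist, squareform

full = squareform(pdist(coor))   # coor: the coordinate array from above
upper = np.triu(full)            # zero out the lower triangle if only that is wanted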

Related

calculate distance from all points in numpy array to a single point on the basis of index

Suppose a 2d array is given as:
arr = array([[1, 1, 1],
             [4, 5, 8],
             [2, 6, 9]])
If point = array([1,1]) is given, then I want to calculate the Euclidean distance from all indices of arr to the point (1,1). The result should be:
array([[1.41, 1.  , 1.41],
       [1.  , 0.  , 1.  ],
       [1.41, 1.  , 1.41]])
A for loop is too slow for these computations. Is there any faster method to achieve this using numpy or scipy?
Thanks!!!
Approach #1
You can use scipy.ndimage.morphology.distance_transform_edt -
from scipy.ndimage.morphology import distance_transform_edt

def distmat(a, index):
    mask = np.ones(a.shape, dtype=bool)
    mask[index[0], index[1]] = False
    return distance_transform_edt(mask)
Approach #2
Another with NumPy-native tools -
def distmat_v2(a, index):
    i, j = np.indices(a.shape, sparse=True)
    return np.sqrt((i - index[0])**2 + (j - index[1])**2)
Sample run -
In [60]: a
Out[60]:
array([[1, 1, 1],
       [4, 5, 8],
       [2, 6, 9]])
In [61]: distmat(a, index=[1,1])
Out[61]:
array([[1.41421356, 1.        , 1.41421356],
       [1.        , 0.        , 1.        ],
       [1.41421356, 1.        , 1.41421356]])
In [62]: distmat_v2(a, index=[1,1])
Out[62]:
array([[1.41421356, 1.        , 1.41421356],
       [1.        , 0.        , 1.        ],
       [1.41421356, 1.        , 1.41421356]])
Benchmarking
Other proposed solution(s):

# https://stackoverflow.com/a/61629292/3293881 by @Ehsan
def norm_method(arr, point):
    point = np.asarray(point)
    return np.linalg.norm(np.indices(arr.shape, sparse=True) - point)
Using the benchit package (a few benchmarking tools packaged together; disclaimer: I am its author) to benchmark the proposed solutions.
In [66]: import benchit
In [76]: funcs = [distmat, distmat_v2, norm_method]
In [77]: inputs = {n:(np.random.rand(n,n),[1,1]) for n in [3,10,50,100,500,1000,2000,5000]}
In [83]: T = benchit.timings(funcs, inputs, multivar=True, input_name='Length')
In [84]: T.plot(logx=True, colormap='Dark2', savepath='plot.png')
So, distmat_v2 seems to be doing really well. We can improve on it further by leveraging numexpr, as sketched below.
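For example, a possible numexpr variant of distmat_v2 might look like this sketch (numexpr is an extra dependency, and distmat_v3 is just a hypothetical name; dense index grids are used to sidestep broadcasting concerns):
import numexpr as ne
import numpy as np

def distmat_v3(a, index):
    i, j = np.indices(a.shape)                 # dense index grids, same shape as a
    i0, j0 = float(index[0]), float(index[1])
    # single multi-threaded pass over the distance formula
    return ne.evaluate('sqrt((i - i0)**2 + (j - j0)**2)')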
Extend to array of indices
We could extend the listed solutions to the more general case of a list/array of indices, computing for every other position the Euclidean distance to the nearest of those indices, like so -
def distmat_indices(a, indices):
    indices = np.atleast_2d(indices)
    mask = np.ones(a.shape, dtype=bool)
    mask[indices[:, 0], indices[:, 1]] = False
    return distance_transform_edt(mask)

def distmat_indices_v2(a, indices):
    indices = np.atleast_2d(indices)
    i, j = np.indices(a.shape, sparse=True)
    return np.sqrt(((i - indices[:, 0])[..., None])**2 + (j - indices[:, 1, None])**2).min(1)
Sample run -
In [143]: a = np.random.rand(4,5)
In [144]: distmat_indices(a, indices=[[2,2],[0,3]])
Out[144]:
array([[2.82842712, 2.        , 1.        , 0.        , 1.        ],
       [2.23606798, 1.41421356, 1.        , 1.        , 1.41421356],
       [2.        , 1.        , 0.        , 1.        , 2.        ],
       [2.23606798, 1.41421356, 1.        , 1.41421356, 2.23606798]])
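As a quick consistency check (a sketch reusing the sample array a above; not part of the original answer), the two variants should agree:
out1 = distmat_indices(a, indices=[[2, 2], [0, 3]])
out2 = distmat_indices_v2(a, indices=[[2, 2], [0, 3]])
print(np.allclose(out1, out2))   # True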
On top of @Divakar's good solutions, if you are looking for something more abstract, you can use:
np.linalg.norm(np.indices(arr.shape, sparse=True)-point)
Note that this works with numpy 1.17+ (the sparse argument of np.indices was added in numpy 1.17). Upgrade your numpy and enjoy.
In case you have a numpy version older than 1.17, you can add dimensions to your point like this:
np.linalg.norm(np.indices(arr.shape)-point[:,None,None], axis=0)
Output for point = np.array([1,1]) and the array given in the question:
[[1.41421356 1.         1.41421356]
 [1.         0.         1.        ]
 [1.41421356 1.         1.41421356]]

Fast K-step discounting in numpy/scipy/python

x has shape [batch_size, n_time], where the batches are independent.
If k=3 and d=discount_rate, the pseudocode is:
x[:,i] = x[:,i] + x[:,i+1]*(d**1) + x[:,i+2]*(d**2) + x[:,i+3]*(d**3)
Here's working code, but it's very slow. I'll be executing this function millions of times, so I'm hoping for a faster implementation
import numpy as np
def k_step_discount(x, k, discount_rate):
    n_time = x.shape[1]
    k_include_cur = k + 1  # k excludes current timestep
    for i in range(n_time):
        k_cur = min(n_time - i, k_include_cur)  # prevent out of bounds
        for j in range(1, k_cur):
            x[:, i] += x[:, i+j] * (discount_rate ** j)
    return x
x = np.array([
    [0, 0, 0, 1, 0, 0],
    [0, 1, 2, 3, 4, 5.]
])
y = k_step_discount(x+0, k=2, discount_rate=.9)
print('x\n{}\ny\n{}'.format(x, y))
>> x
[[ 0.  0.  0.  1.  0.  0.]
 [ 0.  1.  2.  3.  4.  5.]]
>> y
[[ 0.    0.81  0.9   1.    0.    0.  ]
 [ 2.52  5.23  7.94 10.65  8.5   5.  ]]
A scipy function that's similar is:
import scipy.signal
import numpy as np
x = np.array([[0,0,0,1,0,0.]])
discount_rate = .9
y = np.flip(scipy.signal.lfilter([1], [1, -discount_rate], np.flip(x+0, 1), axis=1), 1)
print('x\n{}\ny\n{}'.format(x, y))
>> x
[[ 0. 0. 0. 1. 0. 0.]]
>> y
[[ 0.729 0.81 0.9 1. 0. 0. ]]
However, it discounts until the end of n_time rather than for only k steps.
I'm also interested in K-step discounting without batches, if that would be easier/faster:
import numpy as np
def k_step_discount_no_batch(x, k, discount_rate):
    n_time = x.shape[0]
    k_include_cur = k + 1  # k excludes current timestep
    for i in range(n_time):
        k_cur = min(n_time - i, k_include_cur)  # prevent out of bounds
        for j in range(1, k_cur):
            x[i] += x[i+j] * (discount_rate ** j)
    return x
x = np.array([8,0,0,0,1,2.])
y = k_step_discount_no_batch(x+0, k=2, discount_rate=.9)
print('x\n{}\ny\n{}'.format(x, y))
>> x
[ 8. 0. 0. 0. 1. 2.]
>> y
[ 8. 0. 0.81 2.52 2.8 2. ]
A similar no_batch scipy function:
import scipy.signal
import numpy as np
x = np.array([8,0,0,0,1,2.])
discount_rate = .9
y = scipy.signal.lfilter([1], [1, -discount_rate], x[::-1], axis=0)[::-1]
print('x\n{}\ny\n{}'.format(x, y))
>> x
[ 8. 0. 0. 0. 1. 2.]
>> y
[ 9.83708 2.0412 2.268 2.52 2.8 2. ]
You could use 2D convolution here. To get the scaling done properly, we need to create the proper 2D kernel, which is a flipped version of the powers of discount_rate. This is in accordance with the definition of convolution, in which the kernel is slid in flipped order against the input data, and the input elements are scaled by the kernel elements and summed up, which is precisely what is needed in this case.
Thus, the implementation would be simply -
from scipy.signal import convolve2d as conv2d
import numpy as np
def k_step_discount_conv2d(x, k, discount_rate, is_batch=True):
    if is_batch:
        kernel = discount_rate**np.arange(k+1)[::-1][None]
        return conv2d(x, kernel)[:, k:]
    else:
        kernel = discount_rate**np.arange(k+1)[::-1]
        return np.convolve(x, kernel)[k:]
Sample run -
In [190]: x
Out[190]:
array([[ 0.,  0.,  0.,  1.,  0.,  0.],
       [ 0.,  1.,  2.,  3.,  4.,  5.]])
# Proposed method
In [191]: k_step_discount_conv2d(x, k=2, discount_rate=0.9)
Out[191]:
array([[ 0.  ,  0.81,  0.9 ,  1.  ,  0.  ,  0.  ],
       [ 2.52,  5.23,  7.94, 10.65,  8.5 ,  5.  ]])
# Original loopy method
In [192]: k_step_discount(x, k=2, discount_rate=.9)
Out[192]:
array([[ 0.  ,  0.81,  0.9 ,  1.  ,  0.  ,  0.  ],
       [ 2.52,  5.23,  7.94, 10.65,  8.5 ,  5.  ]])
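The is_batch=False branch can be exercised the same way on the 1-D example from the question (a quick sketch, not in the original answer; y_nb is just a local name):
y_nb = k_step_discount_conv2d(np.array([8, 0, 0, 0, 1, 2.]), k=2,
                              discount_rate=0.9, is_batch=False)
print(y_nb)   # [8.   0.   0.81 2.52 2.8  2.  ] -- matches k_step_discount_no_batch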
Runtime test
In [206]: x = np.random.randint(0,9,(100,1000)).astype(float)
In [207]: %timeit k_step_discount_conv2d(x, k=2, discount_rate=0.9)
1000 loops, best of 3: 1.27 ms per loop
In [208]: %timeit k_step_discount(x, k=2, discount_rate=.9)
100 loops, best of 3: 4.83 ms per loop
With bigger k's :
In [215]: x = np.random.randint(0,9,(100,1000)).astype(float)
In [216]: %timeit k_step_discount_conv2d(x, k=20, discount_rate=0.9)
100 loops, best of 3: 5.44 ms per loop
In [217]: %timeit k_step_discount(x, k=20, discount_rate=.9)
10 loops, best of 3: 44.8 ms per loop
Thus, expect huge speedups with bigger k's!
Further boost
As suggested by @Eric, we could also leverage scipy.ndimage.filters' 1D convolution here.
For a proper comparison listing both with Scipy's 2D and 1D convolution methods -
from scipy.ndimage.filters import convolve1d as conv1d
def using_conv2d(x, k, discount_rate):
    kernel = discount_rate**np.arange(k+1)[::-1][None]
    return conv2d(x, kernel)[:, k:]

def using_conv1d(x, k, discount_rate):
    kernel = discount_rate**np.arange(k+1)[::-1]
    return conv1d(x, kernel, mode='constant', origin=k//2)
Timings -
In [100]: x = np.random.randint(0,9,(100,1000)).astype(float)
In [101]: out1 = using_conv2d(x, k=20, discount_rate=0.9)
...: out2 = using_conv1d(x, k=20, discount_rate=0.9)
...:
In [102]: np.allclose(out1, out2)
Out[102]: True
In [103]: %timeit using_conv2d(x, k=20, discount_rate=0.9)
100 loops, best of 3: 5.27 ms per loop
In [104]: %timeit using_conv1d(x, k=20, discount_rate=0.9)
1000 loops, best of 3: 1.43 ms per loop

Numpy matrix combination

I have a rotation matrix and a translation vector as corresponding numpy objects. What is the best way to combine them into a 4x4 transform matrix? Are there any functions that avoid tedious element-wise copying?
There are many ways to do this; here are two.
You can create an empty 4x4 array. Then the rotation matrix and the translation vector can each be copied into the 4x4 transform matrix with slice assignment. For example, R and t are the rotation matrix and translation vector, respectively.
In [23]: R
Out[23]:
array([[ 0.51456517, -0.25333656,  0.81917231],
       [ 0.16196059,  0.96687621,  0.19727939],
       [-0.8420163 ,  0.03116053,  0.53855136]])
In [24]: t
Out[24]: array([ 1. , 2. , 0.5])
Create an empty 4x4 array M, and fill it with R and t.
In [25]: M = np.empty((4, 4))
In [26]: M[:3, :3] = R
In [27]: M[:3, 3] = t
In [28]: M[3, :] = [0, 0, 0, 1]
In [29]: M
Out[29]:
array([[ 0.51456517, -0.25333656,  0.81917231,  1.        ],
       [ 0.16196059,  0.96687621,  0.19727939,  2.        ],
       [-0.8420163 ,  0.03116053,  0.53855136,  0.5       ],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])
Or you can assemble the transform matrix with functions such as numpy.hstack and numpy.vstack:
In [30]: M = np.vstack((np.hstack((R, t[:, None])), [0, 0, 0, 1]))
In [31]: M
Out[31]:
array([[ 0.51456517, -0.25333656,  0.81917231,  1.        ],
       [ 0.16196059,  0.96687621,  0.19727939,  2.        ],
       [-0.8420163 ,  0.03116053,  0.53855136,  0.5       ],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])
Note that t[:, None] (which could also be spelled t[:, np.newaxis] or t.reshape(-1, 1)) creates a 2-d view of t with shape (3, 1). This makes the shape compatible with M in the call to np.hstack.
In [55]: t[:, None]
Out[55]:
array([[ 1. ],
       [ 2. ],
       [ 0.5]])
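For completeness (an aside, not from the original answer), newer NumPy versions also provide np.block, which expresses the same block assembly in a single call:
M = np.block([
    [R, t[:, None]],                      # 3x3 rotation beside the 3x1 translation
    [np.zeros((1, 3)), np.ones((1, 1))],  # homogeneous bottom row [0, 0, 0, 1]
])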

Plot parametric mean in Python

I have a large real 1-d data set called r. I would like to plot:
mean(log(1 + a*r)) vs a, with a > -1.
This is my code:
import math
import scipy
import pandas as pd
import matplotlib.pyplot as plt

rr = pd.read_csv('goog.csv')
dd=rr['Close']
series=pd.Series(dd)
seriespct=series.pct_change()
seriespct[0]=seriespct.mean()
dum1 =[0]*len(dd)
a=1.
a_max = 1.
a_step = 0.01
a = scipy.arange(-3.+a_step, a_max, a_step)
n = len(a)
dum2 =[0]*n
m=len(dd)
for j in range(n):
    for i in range(m):
        dum1[i] = math.log(1 + a[j]*seriespct[i])
    dum2[j] = scipy.mean(dum1)
plt.plot(a,dum2)
plt.show()
How can I do this in a more elegant way?
I would recommend this:
plt.plot(a, np.log(1 + r*a[:,None]).mean(1))
This has a big speed advantage because it avoids Python for-loops; the looping is done inside numpy, which is significantly faster when your dataset is large.
In [49]: a = np.arange(a_step-.3, a_max, a_step)
In [50]: r = np.random.random(100)
In [51]: timeit [scipy.mean(log(1+a[i]*r)) for i in range(len(a))]
100 loops, best of 3: 5.47 ms per loop
In [52]: timeit np.log(1 + r*a[:,None]).mean(1)
1000 loops, best of 3: 384 µs per loop
It works by broadcasting, so that a varies along one axis and r along another; you then take the mean along the axis that r varies along, leaving an array that varies with a (and has the same shape as a):
import numpy as np
import matplotlib.pyplot as plt
r = np.random.random(100)
a = 1.
a_max = 1.
a_step = 0.01
a = np.arange(a_step-.3, a_max, a_step)
a.shape
#(129,)
a = a[:,None] #adds a new axis, making this a column vector, same as: a = a.reshape(-1,1)
a.shape
#(129, 1)
(a*r).shape
#(129, 100)
loga = np.log(1 + a*r)
loga.shape
#(129,100)
mloga = loga.mean(axis=1) #take the mean along the 2nd axis where `a` varies
mloga.shape
#(129,)
plt.plot(a, mloga)
plt.show()
ADDENDUM:
To avoid dependency on broadcasting, you can use np.outer:
plt.plot(a, np.log(1 + np.outer(a,r)).mean(1))
Which has no need for reshaping a (skip the step a = a[:,None])
Here's a simpler example, so you can see what's happening:
r = np.exp(np.arange(1,5))
a = np.arange(5)
In [33]: r
Out[33]: array([ 2.71828183, 7.3890561 , 20.08553692, 54.59815003])
In [34]: a
Out[34]: array([0, 1, 2, 3, 4])
In [39]: r*a[:,None]
Out[39]:
# columns are r = 2.71..., 7.38..., 20.08..., 54.59..., times a:
array([[  0.        ,   0.        ,   0.        ,   0.        ],   # a = 0
       [  2.71828183,   7.3890561 ,  20.08553692,  54.59815003],   # a = 1
       [  5.43656366,  14.7781122 ,  40.17107385, 109.19630007],   # a = 2
       [  8.15484549,  22.1671683 ,  60.25661077, 163.7944501 ],   # a = 3
       [ 10.87312731,  29.5562244 ,  80.34214769, 218.39260013]])  # a = 4
In [40]: np.outer(a,r)
Out[40]:
array([[  0.        ,   0.        ,   0.        ,   0.        ],
       [  2.71828183,   7.3890561 ,  20.08553692,  54.59815003],
       [  5.43656366,  14.7781122 ,  40.17107385, 109.19630007],
       [  8.15484549,  22.1671683 ,  60.25661077, 163.7944501 ],
       [ 10.87312731,  29.5562244 ,  80.34214769, 218.39260013]])
# this is the mean of each column:
In [41]: (np.outer(a,r)).mean(1)
Out[41]: array([ 0. , 21.19775622, 42.39551244, 63.59326866, 84.79102488])
# and the log of 1 + the above is:
In [42]: np.log(1+(np.outer(a,r)).mean(1))
Out[42]: array([ 0. , 3.09999121, 3.77035604, 4.16811021, 4.4519144 ])
You can use scipy to do means.
You can use matplotlib to do plotting.
import scipy
from matplotlib import pyplot
#convert r from a python list to an 1-D array
r = scipy.array(r)
#edit these
a_max = 100
a_step = 0.1
a = scipy.arange(-1+a_step, a_max, a_step)
n = len(a)
pyplot.plot(a, [scipy.mean(scipy.log(1 + a[i]*r)) for i in range(n)], 'b-')
pyplot.show()

Scipy Sparse Matrix special subtraction

I'm doing a project that involves a lot of matrix computation, and I'm looking for a smart way to speed up my code. In my project, I'm dealing with a sparse matrix of size 100M x 1M with around 10M non-zero values. The example below is just to illustrate my point.
Let's say I have:
A vector v of size (2)
A vector c of size (3)
A sparse matrix X of size (2,3)
import numpy as np
import scipy.sparse
from scipy.sparse import coo_matrix

v = np.asarray([10, 20])
c = np.asarray([ 2, 3, 4])
data = np.array([1, 1, 1, 1])
row = np.array([0, 0, 1, 1])
col = np.array([1, 2, 0, 2])
X = coo_matrix((data,(row,col)), shape=(2,3))
X.todense()
# matrix([[0, 1, 1],
# [1, 0, 1]])
Currently I'm doing:
result = np.zeros_like(v)
d = scipy.sparse.lil_matrix((v.shape[0], v.shape[0]))
d.setdiag(v)
tmp = d * X
print(tmp.todense())
# matrix([[ 0., 10., 10.],
#         [20.,  0., 20.]])
# At this point tmp is a CSR sparse matrix
for i in range(tmp.shape[0]):
    x_i = tmp.getrow(i)
    result += x_i.data * (c[x_i.indices] - x_i.data)
    # I only want to do the subtraction on non-zero elements
print(result)
# array([-430, -380])
My problem is the for loop, and especially the subtraction.
I would like to find a way to vectorize this operation by subtracting only on the non-zero elements.
Something that directly gives the sparse matrix of the subtraction:
matrix([[  0.,  -7.,  -6.],
        [-18.,   0., -16.]])
Is there a way to do this smartly ?
You don't need to loop over the rows to do what you are already doing. And you can use a similar trick to perform the multiplication of the rows by the first vector:
import scipy.sparse as sps

# This assumes X is in CSR format (use X.tocsr() first if it is COO, as in the question).
# number of nonzero entries per row of X
nnz_per_row = np.diff(X.indptr)
# multiply every row by the corresponding entry of v
# You could do this in-place as:
# X.data *= np.repeat(v, nnz_per_row)
Y = sps.csr_matrix((X.data * np.repeat(v, nnz_per_row), X.indices, X.indptr),
                   shape=X.shape)
# subtract from the non-zero entries the corresponding column value in c...
Y.data -= np.take(c, Y.indices)
# ...and multiply by -1 to get the value you are after
Y.data *= -1
To see that it works, set up some dummy data
rows, cols = 3, 5
v = np.random.rand(rows)
c = np.random.rand(cols)
X = sps.rand(rows, cols, density=0.5, format='csr')
and after running the code above:
>>> x = X.toarray()
>>> mask = x == 0
>>> x *= v[:, np.newaxis]
>>> x = c - x
>>> x[mask] = 0
>>> x
array([[ 0.79935123,  0.        ,  0.        , -0.0097763 ,  0.59901243],
       [ 0.7522559 ,  0.        ,  0.67510109,  0.        ,  0.36240006],
       [ 0.        ,  0.        ,  0.72370725,  0.        ,  0.        ]])
>>> Y.toarray()
array([[ 0.79935123,  0.        ,  0.        , -0.0097763 ,  0.59901243],
       [ 0.7522559 ,  0.        ,  0.67510109,  0.        ,  0.36240006],
       [ 0.        ,  0.        ,  0.72370725,  0.        ,  0.        ]])
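Applied to the tiny example from the question (a sketch; Xq is just a local name, and the COO matrix has to be converted to CSR so that .indptr/.indices line up per row), this reproduces exactly the matrix the question asked for:
# rebuild the question's small example
v = np.asarray([10, 20])
c = np.asarray([2, 3, 4])
Xq = sps.coo_matrix((np.array([1, 1, 1, 1]),
                     (np.array([0, 0, 1, 1]), np.array([1, 2, 0, 2]))),
                    shape=(2, 3)).tocsr()

nnz_per_row = np.diff(Xq.indptr)
Y = sps.csr_matrix((Xq.data * np.repeat(v, nnz_per_row), Xq.indices, Xq.indptr),
                   shape=Xq.shape)
Y.data -= np.take(c, Y.indices)
Y.data *= -1
print(Y.toarray())
# [[  0  -7  -6]
#  [-18   0 -16]]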
The way you are accumulating your result requires that there are the same number of non-zero entries in every row, which seems a pretty weird thing to do. Are you sure that is what you are after? If that's really what you want, you could get that value with something like:
result = np.sum(Y.data.reshape(Y.shape[0], -1), axis=0)
but I have trouble believing that is really what you are after...
