I have a TF-IDF matrix of shape (149, 1001). What I want is to compute the cosine similarity of the last column with all the other columns.
Here is what I did
from numpy import dot
from numpy.linalg import norm
for i in range(mat.shape[1]-1):
    cos_sim = dot(mat[:,i], mat[:,-1])/(norm(mat[:,i])*norm(mat[:,-1]))
    cos_sim
But this loop makes it slow. So, is there any efficient way? I want to do it with NumPy only.
Leverage 2D vectorized matrix-multiplication
Here's one with NumPy using matrix-multiplication on 2D data -
p1 = mat[:,-1].dot(mat[:,:-1])
p2 = norm(mat[:,:-1],axis=0)*norm(mat[:,-1])
out1 = p1/p2
Explanation: p1 is the vectorized equivalent of the looped dot(mat[:,i], mat[:,-1]), and p2 of (norm(mat[:,i])*norm(mat[:,-1])).
Sample run for verification -
In [57]: np.random.seed(0)
...: mat = np.random.rand(149,1001)
In [58]: out = np.empty(mat.shape[1]-1)
    ...: for i in range(mat.shape[1]-1):
    ...:     out[i] = dot(mat[:,i], mat[:,-1])/(norm(mat[:,i])*norm(mat[:,-1]))
In [59]: p1 = mat[:,-1].dot(mat[:,:-1])
...: p2 = norm(mat[:,:-1],axis=0)*norm(mat[:,-1])
...: out1 = p1/p2
In [60]: np.allclose(out, out1)
Out[60]: True
Timings -
In [61]: %%timeit
    ...: out = np.empty(mat.shape[1]-1)
    ...: for i in range(mat.shape[1]-1):
    ...:     out[i] = dot(mat[:,i], mat[:,-1])/(norm(mat[:,i])*norm(mat[:,-1]))
18.5 ms ± 977 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [62]: %%timeit
...: p1 = mat[:,-1].dot(mat[:,:-1])
...: p2 = norm(mat[:,:-1],axis=0)*norm(mat[:,-1])
...: out1 = p1/p2
939 µs ± 29.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# @yatu's soln
In [89]: a = mat
In [90]: %timeit cosine_similarity(a[None,:,-1] , a.T[:-1])
2.47 ms ± 461 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Further optimization of norm with einsum
Alternatively, we could compute p2 with np.einsum.
So, norm(mat[:,:-1],axis=0) could be replaced by:
np.sqrt(np.einsum('ij,ij->j',mat[:,:-1],mat[:,:-1]))
Hence, giving us a modified p2:
p2 = np.sqrt(np.einsum('ij,ij->j',mat[:,:-1],mat[:,:-1]))*norm(mat[:,-1])
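As a quick sanity check (on random stand-in data, just to verify the identity), the einsum expression matches norm along axis=0:
X = np.random.rand(149, 1000)  # random data purely for the check
np.allclose(np.sqrt(np.einsum('ij,ij->j', X, X)), norm(X, axis=0))
# => True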
Timings on same setup as earlier -
In [82]: %%timeit
...: p1 = mat[:,-1].dot(mat[:,:-1])
...: p2 = np.sqrt(np.einsum('ij,ij->j',mat[:,:-1],mat[:,:-1]))*norm(mat[:,-1])
...: out1 = p1/p2
607 µs ± 132 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
30x+ speedup over the loopy one!
There's an sklearn function to compute the cosine similarity between vectors, cosine_similarity. Here's a use case with an example array:
a = np.random.randint(0,10,(5,5))
a
array([[5, 2, 0, 4, 1],
[4, 2, 8, 2, 4],
[9, 7, 4, 9, 7],
[4, 6, 0, 1, 3],
[1, 1, 2, 5, 0]])
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(a[None,:,-1] , a.T[:-1])
# array([[0.94022805, 0.91705665, 0.75592895, 0.79921221, 1. ]])
Where a[None,:,-1] is the last column of a, reshaped so that both matrices have the same shape[1], which is a requirement of the function:
a[None,:,-1]
# array([[1, 4, 7, 3, 0]])
And by transposing a and dropping its last row (a.T[:-1]), the result is the cosine similarity with all other columns.
Check with the solution from the question:
from numpy import dot
from numpy.linalg import norm
cos_sim = []
for i in range(a.shape[1]-1):
    cos_sim.append(dot(a[:,i], a[:,-1])/(norm(a[:,i])*norm(a[:,-1])))
np.allclose(cos_sim, cosine_similarity(a[None,:,-1] , a.T[:-1]))
# True
I want to index an array with a boolean mask through multiple boolean arrays, without a loop.
This is what I want to achieve, but without a loop and with NumPy only.
import numpy as np
a = np.array([[0, 1],[2, 3]])
b = np.array([[[1, 0], [1, 0]], [[0, 0], [1, 1]]], dtype=bool)
r = []
for x in b:
    print(a[x])
    r.extend(a[x])
# => array([0, 2])
# => array([2, 3])
print(r)
# => [0, 2, 2, 3]
# what I would like to do is something like this
r = some_fancy_indexing_magic_with_b_and_a
print(r)
# => [0, 2, 2, 3]
Approach #1
Simply broadcast a to b's shape with np.broadcast_to and then mask it with b -
In [15]: np.broadcast_to(a,b.shape)[b]
Out[15]: array([0, 2, 2, 3])
Approach #2
Another would be to get all the flattened indices, take them modulo the size of a (which is also the size of each 2D block in b), and then index into the flattened a -
a.ravel()[np.flatnonzero(b)%a.size]
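On the question's sample a and b, the intermediate steps look like this:
np.flatnonzero(b)             # flat indices of all True cells => array([0, 2, 6, 7])
np.flatnonzero(b) % a.size    # wrapped back into one a-sized block => array([0, 2, 2, 3])
a.ravel()[np.flatnonzero(b) % a.size]
# => array([0, 2, 2, 3])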
Approach #3
Along the same lines as Approach #2, but keeping the 2D format and using the non-zero indices along the last two axes of b -
_,r,c = np.nonzero(b)
out = a[r,c]
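On the same sample inputs, this agrees with the loop's result:
_, r, c = np.nonzero(b)
a[r, c]
# => array([0, 2, 2, 3])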
Timings on large arrays (given sample shapes scaled up by 100x) -
In [50]: np.random.seed(0)
...: a = np.random.rand(200,200)
...: b = np.random.rand(200,200,200)>0.5
In [51]: %timeit np.broadcast_to(a,b.shape)[b]
45.5 ms ± 381 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [52]: %timeit a.ravel()[np.flatnonzero(b)%a.size]
94.6 ms ± 1.64 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [53]: %%timeit
...: _,r,c = np.nonzero(b)
...: out = a[r,c]
128 ms ± 1.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I have two numpy arrays, A and B. A contains unique values and B is a sub-array of A.
Now I am looking for a way to get the index of B's values within A.
For example:
A = np.array([1,2,3,4,5,6,7,8,9,10])
B = np.array([1,7,10])
# I need a function fun() that:
fun(A,B)
>> 0,6,9
You can use np.in1d with np.nonzero -
np.nonzero(np.in1d(A,B))[0]
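On the question's arrays this gives:
A = np.array([1,2,3,4,5,6,7,8,9,10])
B = np.array([1,7,10])
np.nonzero(np.in1d(A, B))[0]
# => array([0, 6, 9])
Note that this returns positions in A's order, which happens to match B's order here because both arrays are sorted.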
You can also use np.searchsorted, if you care about maintaining the order of B in the output (this assumes A is sorted) -
np.searchsorted(A,B)
For a generic case, when A & B are unsorted arrays, you can bring in the sorter option in np.searchsorted, like so -
sort_idx = A.argsort()
out = sort_idx[np.searchsorted(A,B,sorter = sort_idx)]
I would also add my favorite broadcasting into the mix to solve a generic case -
np.nonzero(B[:,None] == A)[1]
Sample run -
In [125]: A
Out[125]: array([ 7, 5, 1, 6, 10, 9, 8])
In [126]: B
Out[126]: array([ 1, 10, 7])
In [127]: sort_idx = A.argsort()
In [128]: sort_idx[np.searchsorted(A,B,sorter = sort_idx)]
Out[128]: array([2, 4, 0])
In [129]: np.nonzero(B[:,None] == A)[1]
Out[129]: array([2, 4, 0])
Have you tried searchsorted?
A = np.array([1,2,3,4,5,6,7,8,9,10])
B = np.array([1,7,10])
A.searchsorted(B)
# array([0, 6, 9])
Just for completeness: if the values in A are non-negative and reasonably small:
lookup = np.empty((np.max(A) + 1), dtype=int)
lookup[A] = np.arange(len(A))
indices = lookup[B]
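For example, with the arrays from the question:
A = np.array([1,2,3,4,5,6,7,8,9,10])
B = np.array([1,7,10])
lookup = np.empty(np.max(A) + 1, dtype=int)
lookup[A] = np.arange(len(A))
lookup[B]
# => array([0, 6, 9])
This trades O(max(A)) memory for constant-time lookups per element of B.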
I had the same question these days. However, timing performance is very critical for me. Therefore, I guess a timing comparison of the different solutions may be useful for others.
As Divakar mentioned, you can use np.in1d(A, B) with np.where or np.nonzero. Moreover, you can combine np.in1d(A, B) with np.intersect1d. Also, np.searchsorted is another useful approach for sorted arrays.
I want to add another simple solution: a list comprehension. It may take longer than the previous ones; however, if you take advantage of the Numba package, it is much less time-consuming.
In [1]: import numpy as np
In [2]: from numba import njit
In [3]: a = np.array([1,2,3,4,5,6,7,8,9,10])
In [4]: b = np.array([1,7,10])
In [5]: np.where(np.in1d(a, b))[0]
...: array([0, 6, 9])
In [6]: np.nonzero(np.in1d(a, b))[0]
...: array([0, 6, 9])
In [7]: np.searchsorted(a, b)
...: array([0, 6, 9])
In [8]: np.searchsorted(a, np.intersect1d(a, b))
...: array([0, 6, 9])
In [9]: [i for i, x in enumerate(a) if x in b]
...: [0, 6, 9]
In [10]: @njit
    ...: def func(a, b):
    ...:     return [i for i, x in enumerate(a) if x in b]
In [11]: func(a, b)
...: [0, 6, 9]
Now, let's compare the timing performance of these solutions.
In [12]: %timeit np.where(np.in1d(a, b))[0]
4.26 µs ± 6.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [13]: %timeit np.nonzero(np.in1d(a, b))[0]
4.39 µs ± 14.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [14]: %timeit np.searchsorted(a, b)
800 ns ± 6.04 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [15]: %timeit np.searchsorted(a, np.intersect1d(a, b))
8.8 µs ± 73.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [16]: %timeit [i for i, x in enumerate(a) if x in b]
15.4 µs ± 18.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [17]: %timeit func(a, b)
336 ns ± 0.579 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
I'm having a hard time trying to solve this problem. The main issue is that I'm running a simulation, so for loops are mostly forbidden. I have an NxM NumPy array, in this case about (10000x20).
stoploss = 19.9 # condition to apply
monte_carlo_simulation(20,1.08,10000,20) #which gives me that 10000x20 np array
mask_trues = np.where(np.any((simulation <= stoploss) == True, axis=1)) # boolean mask
I need some code to make a new vector of len(10000) which returns, for every row, an array with all the True positions. Let's suppose:
function([[False,True,True],[False,False,True]])
output = [[1,2],[2]]
Again, the main problem resides in not using loops.
Simply this:
list(map(np.where, my_array))
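On the example from the question:
a = np.array([[False,True,True],[False,False,True]])
list(map(np.where, a))
# => [(array([1, 2]),), (array([2]),)]
Note that np.where returns a one-element tuple for each 1-D row; if you want bare arrays instead, you can unpack with e.g. list(np.where(row)[0] for row in a).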
Performance comparison against Kasrâmvd's solution:
def f(a):
    return list(map(np.where, a))

def g(a):
    x, y = np.where(a)
    return np.split(y, np.where(np.diff(x) != 0)[0] + 1)
a = np.random.randint(2, size=(10000,20))
%timeit f(a)
%timeit g(a)
7.66 ms ± 38.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
13.3 ms ± 188 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For completeness I'll demonstrate a sparse matrix approach:
In [56]: from scipy import sparse
In [57]: A = np.array([[False,True,True],[False,False,True]])
In [58]: A
Out[58]:
array([[False, True, True],
[False, False, True]])
In [59]: M = sparse.lil_matrix(A)
In [60]: M
Out[60]:
<2x3 sparse matrix of type '<class 'numpy.bool_'>'
with 3 stored elements in LInked List format>
In [61]: M.data
Out[61]: array([list([True, True]), list([True])], dtype=object)
In [62]: M.rows
Out[62]: array([list([1, 2]), list([2])], dtype=object)
And to make a large sparse one:
In [63]: BM = sparse.random(10000,20,.05, 'lil')
In [64]: BM
Out[64]:
<10000x20 sparse matrix of type '<class 'numpy.float64'>'
with 10000 stored elements in LInked List format>
In [65]: BM.rows
Out[65]:
array([list([3]), list([]), list([6, 15]), ..., list([]), list([11]),
list([])], dtype=object)
Rough time tests:
In [66]: arr = BM.A
In [67]: timeit sparse.lil_matrix(arr)
19.5 ms ± 421 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [68]: timeit list(map(np.where,arr))
11 ms ± 55.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [69]: %%timeit
...: x,y = np.where(arr)
...: np.split(y, np.where(np.diff(x) != 0)[0] + 1)
...:
13.8 ms ± 24.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Generating a csr sparse format matrix is faster:
In [70]: timeit sparse.csr_matrix(arr)
2.68 ms ± 120 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [71]: Mr = sparse.csr_matrix(arr)
In [72]: Mr.indices
Out[72]: array([ 3, 6, 15, ..., 8, 16, 11], dtype=int32)
In [73]: Mr.indptr
Out[73]: array([ 0, 1, 1, ..., 9999, 10000, 10000], dtype=int32)
In [74]: np.where(arr)[1]
Out[74]: array([ 3, 6, 15, ..., 8, 16, 11])
Its indices attribute is just like the column output of where, while indptr is like the split indices.
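To make that concrete, a small sketch (Ms is just a hypothetical name here) recovering the per-row column lists from the CSR attributes of the small A above:
Ms = sparse.csr_matrix(A)             # CSR form of the 2x3 boolean A
np.split(Ms.indices, Ms.indptr[1:-1])
# => [array([1, 2]), array([2])]      (index dtype may display as int32)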
Here is one way using np.split() and np.diff():
x, y = np.where(boolean_array)
np.split(y, np.where(np.diff(x) != 0)[0] + 1)
Demo:
In [12]: a = np.array([[False,True,True],[False,False,True]])
In [13]: x, y = np.where(a)
In [14]: np.split(y, np.where(np.diff(x) != 0)[0] + 1)
Out[14]: [array([1, 2]), array([2])]
Say I have a NumPy array:
>>> X = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
>>> X
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
and an array of indexes that I want to select for each row:
>>> ixs = np.array([[1, 3], [0, 1], [1, 2]])
>>> ixs
array([[1, 3],
[0, 1],
[1, 2]])
How do I index the array X so that for every row in X I select the two indices specified in ixs?
So for this case, I want to select element 1 and 3 for the first row, element 0 and 1 for the second row, and so on. The output should be:
array([[2, 4],
[5, 6],
[10, 11]])
A slow solution would be something like this:
output = np.array([row[ix] for row, ix in zip(X, ixs)])
However, this can get kinda slow for extremely long arrays. Is there a faster way to do this without a loop using NumPy?
EDIT: Some very approximate speed tests on a 2.5K * 1M array with 2K wide ixs (10GB):
np.array([row[ix] for row, ix in zip(X, ixs)]) 0.16s
X[np.arange(len(ixs)), ixs.T].T 0.175s
X.take(idx+np.arange(0, X.shape[0]*X.shape[1], X.shape[1])[:,None]) 33s
np.fromiter((X[i, j] for i, row in enumerate(ixs) for j in row), dtype=X.dtype).reshape(ixs.shape) 2.4s
You can use this:
X[np.arange(len(ixs)), ixs.T].T
See the NumPy documentation on advanced (fancy) indexing for the details.
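On the question's example this gives:
X = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
ixs = np.array([[1, 3], [0, 1], [1, 2]])
X[np.arange(len(ixs)), ixs.T].T
# => array([[ 2,  4],
#           [ 5,  6],
#           [10, 11]])
np.arange(len(ixs)) supplies the row number for each selection and broadcasts against the transposed ixs; the final .T restores the original orientation.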
I believe you can use .take thusly:
In [185]: X
Out[185]:
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
In [186]: idx
Out[186]:
array([[1, 3],
[0, 1],
[1, 2]])
In [187]: X.take(idx + (np.arange(X.shape[0]) * X.shape[1]).reshape(-1, 1))
Out[187]:
array([[ 2, 4],
[ 5, 6],
[10, 11]])
If your array dimensions are massive, it might be faster, albeit uglier, to do:
idx+np.arange(0, X.shape[0]*X.shape[1], X.shape[1])[:,None]
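A quick sketch (with made-up small shapes N and M, purely for the check) showing that this offset construction is identical to the reshape(-1, 1) version above:
N, M = 3, 4  # hypothetical small dimensions
np.array_equal(np.arange(0, N*M, M)[:, None], (np.arange(N) * M).reshape(-1, 1))
# => True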
Just for fun, see how the following performs:
np.fromiter((X[i, j] for i, row in enumerate(ixs) for j in row), dtype=X.dtype, count=ixs.size).reshape(ixs.shape)
Edit to add timings
In [15]: X = np.arange(1000*10000, dtype=np.int32).reshape(1000,-1)
In [16]: ixs = np.random.randint(0, 10000, (1000, 2))
In [17]: ixs.sort(axis=1)
In [18]: ixs
Out[18]:
array([[2738, 3511],
[3600, 7414],
[7426, 9851],
...,
[1654, 8252],
[2194, 8200],
[5497, 8900]])
In [19]: %timeit np.array([row[ix] for row, ix in zip(X, ixs)])
928 µs ± 23.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [20]: %timeit X[np.arange(len(ixs)), ixs.T].T
23.6 µs ± 491 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [21]: %timeit X.take(idx+np.arange(0, X.shape[0]*X.shape[1], X.shape[1])[:,None])
20.6 µs ± 530 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [22]: %timeit np.fromiter((X[i, j] for i, row in enumerate(ixs) for j in row), dtype=X.dtype, count=ixs.size).reshape(ixs.shape)
1.42 ms ± 9.94 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
@mxbi: I've added some timings, and my results aren't really consistent with yours; you should check it out.
Here's a larger array:
In [33]: X = np.arange(10000*100000, dtype=np.int32).reshape(10000,-1)
In [34]: ixs = np.random.randint(0, 100000, (10000, 2))
In [35]: ixs.sort(axis=1)
In [36]: X.shape
Out[36]: (10000, 100000)
In [37]: ixs.shape
Out[37]: (10000, 2)
With some results:
In [42]: %timeit np.array([row[ix] for row, ix in zip(X, ixs)])
11.4 ms ± 177 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [43]: %timeit X[np.arange(len(ixs)), ixs.T].T
596 µs ± 17.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [44]: %timeit X.take(ixs+np.arange(0, X.shape[0]*X.shape[1], X.shape[1])[:,None])
540 µs ± 16.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Now, we are using 500 column indices instead of two, and we see the list comprehension start to win out:
In [45]: ixs = np.random.randint(0, 100000, (10000, 500))
In [46]: ixs.sort(axis=1)
In [47]: %timeit np.array([row[ix] for row, ix in zip(X, ixs)])
93 ms ± 1.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [48]: %timeit X[np.arange(len(ixs)), ixs.T].T
133 ms ± 638 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [49]: %timeit X.take(ixs+np.arange(0, X.shape[0]*X.shape[1], X.shape[1])[:,None])
87.5 ms ± 1.13 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
The usual suggestion for indexing items from rows is:
X[np.arange(X.shape[0])[:,None], ixs]
That is, make a row index of shape (n,1) (column vector), which will broadcast with the (n,m) shape of ixs to give a (n,m) solution.
This is basically the same as:
X[np.arange(len(ixs)), ixs.T].T
which broadcasts a (n,) index against a (m,n), and transposes.
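On the question's 3x4 X and ixs, both forms produce the expected result:
X = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
ixs = np.array([[1, 3], [0, 1], [1, 2]])
X[np.arange(X.shape[0])[:, None], ixs]
# => array([[ 2,  4],
#           [ 5,  6],
#           [10, 11]])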
Timings are essentially the same:
In [299]: X = np.ones((1000,2000))
In [300]: ixs = np.random.randint(0,2000,(1000,200))
In [301]: timeit X[np.arange(len(ixs)), ixs.T].T
6.58 ms ± 71.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [302]: timeit X[np.arange(X.shape[0])[:,None], ixs]
6.57 ms ± 129 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
and for comparison:
In [307]: timeit np.array([row[ix] for row, ix in zip(X, ixs)])
6.63 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
I'm a little surprised that this list comprehension does so well. I wonder how the relative advantages compare when the dimensions change, particularly in the relative shape of X and ixs (long, wide etc).
The first solution is the style of indexing produced by ix_:
In [303]: np.ix_(np.arange(3), np.arange(2))
Out[303]:
(array([[0],
[1],
[2]]), array([[0, 1]]))
This should work:
[X[i][y] for i, y in enumerate(ixs)]
EDIT: I just noticed you wanted a no-loop solution.