I'm using numpy einsum to calculate the dot products of an array of column vectors pts, of shape (3,N), with itself, resulting in a matrix dotps, of shape (N,N), that holds all the dot products. This is the code I use:
dotps = np.einsum('ij,ik->jk', pts, pts)
This works, but I only need the values above the main diagonal, i.e. the upper triangular part of the result without the diagonal. Is it possible to compute only these values with einsum, or in any other way that is faster than using einsum to compute the whole matrix?
My pts array can be quite large, so if I could calculate only the values I need, that would roughly double my computation speed.
You can slice out the relevant column pairs and then use np.einsum -
# N is the number of column vectors, i.e. N = pts.shape[1]
R,C = np.triu_indices(N,1)
out = np.einsum('ij,ij->j',pts[:,R],pts[:,C])
Sample run -
In [109]: N = 5
...: pts = np.random.rand(3,N)
...: dotps = np.einsum('ij,ik->jk', pts, pts)
...:
In [110]: dotps
Out[110]:
array([[ 0.26529103, 0.30626052, 0.18373867, 0.13602931, 0.51162729],
[ 0.30626052, 0.56132272, 0.5938057 , 0.28750708, 0.9876753 ],
[ 0.18373867, 0.5938057 , 0.84699103, 0.35788749, 1.04483158],
[ 0.13602931, 0.28750708, 0.35788749, 0.18274288, 0.4612556 ],
[ 0.51162729, 0.9876753 , 1.04483158, 0.4612556 , 1.82723949]])
In [111]: R,C = np.triu_indices(N,1)
...: out = np.einsum('ij,ij->j',pts[:,R],pts[:,C])
...:
In [112]: out
Out[112]:
array([ 0.30626052, 0.18373867, 0.13602931, 0.51162729, 0.5938057 ,
0.28750708, 0.9876753 , 0.35788749, 1.04483158, 0.4612556 ])
Optimizing further -
Let's time our approach and see if there's any scope for improvement performance-wise.
In [126]: N = 5000
In [127]: pts = np.random.rand(3,N)
In [128]: %timeit np.triu_indices(N,1)
1 loops, best of 3: 413 ms per loop
In [129]: R,C = np.triu_indices(N,1)
In [130]: %timeit np.einsum('ij,ij->j',pts[:,R],pts[:,C])
1 loops, best of 3: 1.47 s per loop
Staying within the memory constraints, it doesn't look like we can do much to optimize the np.einsum call itself. So, let's shift the focus to np.triu_indices.
For N = 4, we have :
In [131]: N = 4
In [132]: np.triu_indices(N,1)
Out[132]: (array([0, 0, 0, 1, 1, 2]), array([1, 2, 3, 2, 3, 3]))
The output follows a regular, shifting pattern: both index arrays could be built with a cumulative sum whose increments change at positions 3 and 5. Written generically, that would look something like this -
def triu_indices_cumsum(N):
    # Length of R and C index arrays
    L = (N*(N-1))//2

    # Positions along the R and C arrays that indicate
    # shifting to the next row of the full array
    shifts_idx = np.arange(2,N)[::-1].cumsum()

    # Initialize "shift" arrays finally leading to R and C
    shifts1_arr = np.zeros(L,dtype=int)
    shifts2_arr = np.ones(L,dtype=int)

    # At the shift positions set appropriate values, such that a
    # cumulative sum leads to the desired R and C arrays
    shifts1_arr[shifts_idx] = 1
    shifts2_arr[shifts_idx] = -np.arange(N-2)[::-1]

    # Final cumsum to give R, C
    R_arr = shifts1_arr.cumsum()
    C_arr = shifts2_arr.cumsum()
    return R_arr, C_arr
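As a quick sanity check (not part of the original timings), the custom routine can be verified against np.triu_indices:
# The cumsum-based indices should match np.triu_indices exactly
for n_check in (2, 5, 50):
    R_ref, C_ref = np.triu_indices(n_check, 1)
    R_new, C_new = triu_indices_cumsum(n_check)
    assert np.array_equal(R_ref, R_new) and np.array_equal(C_ref, C_new)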
Let's time it for various N's!
In [133]: N = 100
In [134]: %timeit np.triu_indices(N,1)
10000 loops, best of 3: 122 µs per loop
In [135]: %timeit triu_indices_cumsum(N)
10000 loops, best of 3: 61.7 µs per loop
In [136]: N = 1000
In [137]: %timeit np.triu_indices(N,1)
100 loops, best of 3: 17 ms per loop
In [138]: %timeit triu_indices_cumsum(N)
100 loops, best of 3: 16.3 ms per loop
Thus, it looks like for decent N's, the customized cumsum based triu_indices might be worth a look!
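Putting the pieces together, the full computation would look something like this (a sketch, assuming N = pts.shape[1] as in the question):
# Generate the upper-triangle index pairs and feed them to the einsum from earlier
N = pts.shape[1]
R, C = triu_indices_cumsum(N)          # or np.triu_indices(N, 1)
upper_dotps = np.einsum('ij,ij->j', pts[:, R], pts[:, C])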
Say I have two arrays, A and B, with an element-wise multiplication defined between them. I want to do this multiplication in a convolution-like manner: shift every column of A one step to the right cyclically (so column 1 becomes column 2 and column 3 wraps around to column 1) and multiply with B at each shift.
This should yield a (3 x 2 x 3) array (one 2x3 matrix for each of the 3 possible shifts).
We can concatenate A with a slice of itself and then extract sliding windows from the result. To get those windows, we can use scikit-image's view_as_windows, which is built on np.lib.stride_tricks.as_strided. Then we multiply those windows with B for the final output.
Hence, we will have one vectorized solution like so -
In [70]: from skimage.util.shape import view_as_windows
In [71]: A1 = np.concatenate((A,A[:,:-1]),axis=1)
In [74]: view_as_windows(A1,A.shape)[0]*B
Out[74]:
array([[[1, 0, 3],
[0, 0, 6]],
[[2, 0, 1],
[0, 0, 4]],
[[3, 0, 2],
[0, 0, 5]]])
We can also leverage multiple cores with the numexpr module for the final step of broadcasted multiplication, which should do better on larger arrays. For the sample case, it would be -
In [53]: import numexpr as ne
In [54]: w = view_as_windows(A1,A.shape)[0]
In [55]: ne.evaluate('w*B')
Out[55]:
array([[[1, 0, 3],
[0, 0, 6]],
[[2, 0, 1],
[0, 0, 4]],
[[3, 0, 2],
[0, 0, 5]]])
Timings on large arrays comparing the proposed two methods -
In [56]: A = np.random.rand(500,500)
...: B = np.random.rand(500,500)
In [57]: A1 = np.concatenate((A,A[:,:-1]),axis=1)
...: w = view_as_windows(A1,A.shape)[0]
In [58]: %timeit w*B
...: %timeit ne.evaluate('w*B')
1 loop, best of 3: 422 ms per loop
1 loop, best of 3: 228 ms per loop
Squeezing the best out of the strided-based method
To really squeeze the best out of the strided-view approach, go with the original np.lib.stride_tricks.as_strided version to avoid the function-call overhead of view_as_windows -
def vaw_with_as_strided(A,B):
    A1 = np.concatenate((A,A[:,:-1]),axis=1)
    s0,s1 = A1.strides
    S = (A.shape[1],)+A.shape
    w = np.lib.stride_tricks.as_strided(A1,shape=S,strides=(s1,s0,s1))
    return w*B
Comparing against @Paul Panzer's array-assignment based one, the crossover seems to be at 19x19 shaped arrays -
In [33]: n = 18
...: A = np.random.rand(n,n)
...: B = np.random.rand(n,n)
In [34]: %timeit vaw_with_as_strided(A,B)
...: %timeit pp(A,B)
10000 loops, best of 3: 22.4 µs per loop
10000 loops, best of 3: 21.4 µs per loop
In [35]: n = 19
...: A = np.random.rand(n,n)
...: B = np.random.rand(n,n)
In [36]: %timeit vaw_with_as_strided(A,B)
...: %timeit pp(A,B)
10000 loops, best of 3: 24.5 µs per loop
10000 loops, best of 3: 24.5 µs per loop
So, for anything smaller than 19x19, array-assignment seems to be better, and for anything larger, the strided-based one should be the way to go.
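If you want a single entry point, a tiny dispatcher could encode that crossover (the threshold of 19 comes from the timings above and is machine-specific, so treat it as an assumption):
def shifted_mult(A, B, crossover=19):
    # Use @Paul Panzer's array-assignment version (pp) for small inputs and
    # the strided-view version for larger ones, per the benchmarks above
    if max(A.shape) < crossover:
        return pp(A, B)
    return vaw_with_as_strided(A, B)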
Just a note on view_as_windows/as_strided. Neat as these functions are, it is useful to know that they have a rather pronounced constant overhead. Here is a comparison between @Divakar's view_as_windows based solution (vaw) and a copy-reshape based approach of mine.
As you can see, vaw is not very fast on small to medium-sized operands and only begins to shine above array sizes of roughly 30x30.
Code:
from simple_benchmark import BenchmarkBuilder, MultiArgument
import numpy as np
from skimage.util.shape import view_as_windows

B = BenchmarkBuilder()

@B.add_function()
def vaw(A,B):
    A1 = np.concatenate((A,A[:,:-1]),axis=1)
    w = view_as_windows(A1,A.shape)[0]
    return w*B

@B.add_function()
def pp(A,B):
    m,n = A.shape
    aux = np.empty((n,m,2*n),A.dtype)
    AA = np.concatenate([A,A],1)
    aux.reshape(-1)[:-n].reshape(n,-1)[...] = AA.reshape(-1)[:-1]
    return aux[...,:n]*B

@B.add_arguments('array size')
def argument_provider():
    for exp in range(4, 16):
        dim_size = int(1.4**exp)
        a = np.random.rand(dim_size,dim_size)
        b = np.random.rand(dim_size,dim_size)
        yield dim_size, MultiArgument([a,b])

r = B.run()
r.plot()

import pylab
pylab.savefig('vaw.png')
Run a for loop over the number of columns and use np.roll() along axis=1 to shift the columns, doing the multiplication at each step.
Refer to the accepted answer in this reference.
Hope this helps.
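A rough sketch of that idea (the shift direction and the stacking into a 3D result are assumptions based on the sample output shown above):
import numpy as np

def rolled_products(A, B):
    # One element-wise product of B with each cyclic column shift of A
    n_cols = A.shape[1]
    return np.stack([np.roll(A, -k, axis=1) * B for k in range(n_cols)])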
I can actually pad the array on both sides with 2 columns (to get a 2x5 array) and run a conv2 with b as the kernel; I think that's more efficient.
I have a large numpy array. Is there a way to subtract from each element all of the elements that come after it, and store the results in a new list/array, without using a loop?
A simple example of what I mean:
a = numpy.array([4,3,2,1])
result = [4-3, 4-2, 4-1, 3-2, 3-1, 2-1] = [1, 2, 3, 1, 2, 1]
Note that the 'real' array I am working with doesn't contain numbers in sequence. This is just to make the example simple.
I know the result should have n(n-1)/2 elements, where n is the size of the array.
Is there a way to do this without using a loop, but by repeating the array in a 'smart' way?
Thanks!
temp = a[:, None] - a
result = temp[np.triu_indices(len(a), k=1)]
Perform all pairwise subtractions to produce temp, including subtracting elements from themselves and subtracting earlier elements from later elements, then use triu_indices to select the results we want. (a[:, None] adds an extra length-1 axis to a.)
Note that almost all of the runtime is spent constructing result from temp (because triu_indices is slow and using indices to select the upper triangle of an array is slow). If you can use temp directly, you can save a lot of time:
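For instance, individual differences can be read straight out of temp (i = 2 is just a hypothetical index of interest, not from the original answer):
temp = a[:, None] - a           # temp[i, j] == a[i] - a[j]
i = 2
diffs_from_i = temp[i, i + 1:]  # a[i] minus every later element, no fancy indexing needed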
In [13]: a = numpy.arange(2000)
In [14]: %%timeit
....: temp = a[:, None] - a
....:
100 loops, best of 3: 6.99 ms per loop
In [15]: %%timeit
....: temp = a[:, None] - a
....: result = temp[numpy.triu_indices(len(a), k=1)]
....:
10 loops, best of 3: 51.7 ms per loop
Here's a masking-based approach: after the broadcasted subtractions, the extraction mask is itself created with broadcasting (double broadcasting powered, so to speak) -
r = np.arange(a.size)
out = (a[:, None] - a)[r[:,None] < r]
Runtime test
Vectorized approaches -
# @user2357112's solution
def pairwise_diff_triu_indices_based(a):
    return (a[:, None] - a)[np.triu_indices(len(a), k=1)]

# Proposed in this post
def pairwise_diff_masking_based(a):
    r = np.arange(a.size)
    return (a[:, None] - a)[r[:,None] < r]
Timings -
In [109]: a = np.arange(2000)
In [110]: %timeit pairwise_diff_triu_indices_based(a)
10 loops, best of 3: 36.1 ms per loop
In [111]: %timeit pairwise_diff_masking_based(a)
100 loops, best of 3: 11.8 ms per loop
A closer look at the performance parameters involved
Let's dig a bit deeper into the timings on this setup to study how much the mask-based approach helps. The comparison has two parts - mask creation vs. indices creation, and mask-based boolean indexing vs. integer-based indexing.
How much does mask creation help?
In [37]: r = np.arange(a.size)
In [38]: %timeit np.arange(a.size)
1000000 loops, best of 3: 1.88 µs per loop
In [39]: %timeit r[:,None] < r
100 loops, best of 3: 3 ms per loop
In [40]: %timeit np.triu_indices(len(a), k=1)
100 loops, best of 3: 14.7 ms per loop
About 5x improvement on mask creation over index setup.
How much does boolean indexing help against integer-based indexing?
In [41]: mask = r[:,None] < r
In [42]: idx = np.triu_indices(len(a), k=1)
In [43]: subs = a[:, None] - a
In [44]: %timeit subs[mask]
100 loops, best of 3: 4.15 ms per loop
In [45]: %timeit subs[idx]
100 loops, best of 3: 10.9 ms per loop
About 2.5x improvement here.
a = [4, 3, 2, 1]
differences = ((x - y) for i, x in enumerate(a) for y in a[i+1:])
for diff in differences:
    # do something with the difference
    pass
Check out itertools.combinations:
from itertools import combinations

l = [4, 3, 2, 1]
result = []
for n1, n2 in combinations(l, 2):
    result.append(n1 - n2)
print result
Results in:
[1, 2, 3, 1, 2, 1]
combinations returns an iterator, so this is good for very large lists :)
Let's say I have two large 2-D numpy arrays of the same dimensions (say 2000x2000). I want to sum them element-wise. I was wondering if there is a faster way than np.add().
Edit: I am adding a similar example of what I am using now. Is there a way to speed this up?
# a and b are the two matrices I already have. Dimension is 2000x2000.
# shift is also a list that is previously known.
for j in range(100000):
    b = np.roll(b, shift[j], axis=0)
    a = np.add(a, b)
Approach #1 (Vectorized)
We can use the modulus to simulate the circulating behavior of roll/circshift, and with broadcasted indices covering all rows, we get a fully vectorized approach, like so -
n = b.shape[0]
idx = n-1 - np.mod(shift.cumsum()[:,None]-1 - np.arange(n), n)
a += b[idx].sum(0)
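As a quick check of that index formula (not part of the original answer, and assuming shift is a NumPy array, which the cumsum above already requires), b[idx[j]] reproduces b after the first j+1 rolls:
# e.g. after three rolls the accumulated shift is shift[:3].sum()
assert np.array_equal(b[idx[2]], np.roll(b, shift[:3].sum(), axis=0))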
Approach #2 (Loopy one)
n = b.shape[0]
b_ext = np.row_stack((b, b[:-1]))
start_idx = n-1 - np.mod(shift.cumsum()-1, n)
for j in range(start_idx.size):
    a += b_ext[start_idx[j]:start_idx[j]+n]
Colon notation vs. using indices for slicing
The idea here is to do minimal work once we are inside the loop. We pre-compute the start row index of each iteration before entering the loop, so all that's left inside the loop is slicing with colon notation (a view into the array) and adding up. That should be much better than rolling, which has to compute all of the row indices and produces an expensive copy at every iteration.
Here's a bit more into the view and copy concepts when slicing with colon and indices -
In [11]: a = np.random.randint(0,9,(10))
In [12]: a
Out[12]: array([8, 0, 1, 7, 5, 0, 6, 1, 7, 0])
In [13]: a[3:8]
Out[13]: array([7, 5, 0, 6, 1])
In [14]: a[[3,4,5,6,7]]
Out[14]: array([7, 5, 0, 6, 1])
In [15]: np.may_share_memory(a, a[3:8])
Out[15]: True
In [16]: np.may_share_memory(a, a[[3,4,5,6,7]])
Out[16]: False
Runtime test
Function definitions -
def original_loopy_app(a,b):
    for j in range(shift.size):
        b = np.roll(b, shift[j], axis=0)
        a += b

def vectorized_app(a,b):
    n = b.shape[0]
    idx = n-1 - np.mod(shift.cumsum()[:,None]-1 - np.arange(n), n)
    a += b[idx].sum(0)

def modified_loopy_app(a,b):
    n = b.shape[0]
    b_ext = np.row_stack((b, b[:-1]))
    start_idx = n-1 - np.mod(shift.cumsum()-1, n)
    for j in range(start_idx.size):
        a += b_ext[start_idx[j]:start_idx[j]+n]
Case #1:
In [5]: # Setup input arrays
...: N = 200
...: M = 1000
...: a = np.random.randint(11,99,(N,N))
...: b = np.random.randint(11,99,(N,N))
...: shift = np.random.randint(0,N,M)
...: a1,b1 = a.copy(),b.copy()
...: a2,b2 = a.copy(),b.copy()
...: a3,b3 = a.copy(),b.copy()
...:
In [6]: original_loopy_app(a1,b1)
...: vectorized_app(a2,b2)
...: modified_loopy_app(a3,b3)
...:
In [7]: np.allclose(a1, a2) # Verify results
Out[7]: True
In [8]: np.allclose(a1, a3) # Verify results
Out[8]: True
In [9]: %timeit original_loopy_app(a1,b1)
...: %timeit vectorized_app(a2,b2)
...: %timeit modified_loopy_app(a3,b3)
...:
10 loops, best of 3: 107 ms per loop
10 loops, best of 3: 137 ms per loop
10 loops, best of 3: 48.2 ms per loop
Case #2:
In [13]: # Setup input arrays (datasets are exactly 1/10th of original sizes)
...: N = 200
...: M = 10000
...: a = np.random.randint(11,99,(N,N))
...: b = np.random.randint(11,99,(N,N))
...: shift = np.random.randint(0,N,M)
...:
In [14]: %timeit original_loopy_app(a1,b1)
...: %timeit modified_loopy_app(a3,b3)
...:
1 loops, best of 3: 1.11 s per loop
1 loops, best of 3: 481 ms per loop
So, we are looking at 2x+ speedup there with the modified loopy approach!
I've got a numpy array of strictly increasing "cutoff" values of length m, and a pandas series of values of length n (though the index isn't important and this could be cast to a numpy array).
I need to come up with an efficient way of spitting out a length-m vector of counts of the number of elements in the pandas series less than the jth element of the "cutoff" array.
I could do this via a list comprehension:
output = array([(pan_series < cutoff_val).sum() for cutoff_val in cutoff_ar])
but I was wondering if there is any way to do this that leverages more of numpy's magic speed, as I have to do this quite a few times inside multiple loops and it keeps crashing my computer.
Thanks!
Is this what you are looking for?
In [36]: a = np.random.random(20)
In [37]: a
Out[37]:
array([ 0.68574307, 0.15743428, 0.68006876, 0.63572484, 0.26279663,
0.14346269, 0.56267286, 0.47250091, 0.91168387, 0.98915746,
0.22174062, 0.11930722, 0.30848231, 0.1550406 , 0.60717858,
0.23805205, 0.57718675, 0.78075297, 0.17083826, 0.87301963])
In [38]: b = np.array((0.3,0.7))
In [39]: np.sum(a[:,None]<b[None,:], axis=0)
Out[39]: array([ 8, 16])
In [40]: np.sum(a[:,None]<b, axis=0) # b's new axis above is unnecessary...
Out[40]: array([ 8, 16])
In [41]: (a[:,None]<b).sum(axis=0) # even simpler
Out[41]: array([ 8, 16])
Timings are always welcome (here for a longish array of 2E6 elements):
In [47]: a = np.random.random(2000000)
In [48]: %timeit (a[:,None]<b).sum(axis=0)
10 loops, best of 3: 78.2 ms per loop
In [49]: %timeit np.searchsorted(a, b, 'right',sorter=a.argsort())
1 loop, best of 3: 448 ms per loop
For a smaller array
In [50]: a = np.random.random(2000)
In [51]: %timeit (a[:,None]<b).sum(axis=0)
10000 loops, best of 3: 89 µs per loop
In [52]: %timeit np.searchsorted(a, b, 'right',sorter=a.argsort())
The slowest run took 4.86 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 141 µs per loop
Edit
Divakar points out that things may be different for lengthy b's - let's see:
In [71]: a = np.random.random(2000)
In [72]: b =np.random.random(200)
In [73]: %timeit (a[:,None]<b).sum(axis=0)
1000 loops, best of 3: 1.44 ms per loop
In [74]: %timeit np.searchsorted(a, b, 'right',sorter=a.argsort())
10000 loops, best of 3: 172 µs per loop
Quite different indeed! Thank you for prompting my curiosity.
The OP should probably test for their own use case: is the sample very long with respect to the cutoff sequence or not, and where is the balance?
Edit #2
I made a blooper in my timings, I forgot the axis=0 argument to .sum()...
I've edited the timings with the corrected statement and, of course, the corrected timing. My apologies.
You can use np.searchsorted for some NumPy magic -
# Convert to numpy array for some "magic"
pan_series_arr = np.array(pan_series)
# Let the magic begin!
sortidx = pan_series_arr.argsort()
out = np.searchsorted(pan_series_arr,cutoff_ar,'right',sorter=sortidx)
Explanation
You are performing [(pan_series < cutoff_val).sum() for cutoff_val in cutoff_ar], i.e. for each
element in cutoff_ar, counting the number of pan_series elements that are less than it. Now with np.searchsorted, we instead look for the positions at which the elements of cutoff_ar would be placed in a sorted pan_series_arr (to their 'right' side). Those indices essentially represent the number of pan_series elements below the current cutoff_ar element, thus giving us the desired output.
Sample run
In [302]: cutoff_ar
Out[302]: array([ 1, 3, 9, 44, 63, 90])
In [303]: pan_series_arr
Out[303]: array([ 2, 8, 69, 55, 97])
In [304]: [(pan_series_arr < cutoff_val).sum() for cutoff_val in cutoff_ar]
Out[304]: [0, 1, 2, 2, 3, 4]
In [305]: sortidx = pan_series_arr.argsort()
...: out = np.searchsorted(pan_series_arr,cutoff_ar,'right',sorter=sortidx)
...:
In [306]: out
Out[306]: array([0, 1, 2, 2, 3, 4])
For example, I have two ndarrays, the shape of train_dataset is (10000, 28, 28) and the shape of val_dateset is (2000, 28, 28).
Other than using iteration, is there an efficient way to use numpy array functions to find the overlap between two ndarrays?
One trick I learnt from Jaime's excellent answer here is to use an np.void dtype in order to view each row in the input arrays as a single element. This allows you to treat them as 1D arrays, which can then be passed to np.in1d or one of the other set routines.
import numpy as np
def find_overlap(A, B):
    if not A.dtype == B.dtype:
        raise TypeError("A and B must have the same dtype")
    if not A.shape[1:] == B.shape[1:]:
        raise ValueError("the shapes of A and B must be identical apart from "
                         "the row dimension")

    # reshape A and B to 2D arrays, forcing a copy if necessary in order to
    # ensure that they are C-contiguous
    A = np.ascontiguousarray(A.reshape(A.shape[0], -1))
    B = np.ascontiguousarray(B.reshape(B.shape[0], -1))

    # void type that views each row in A and B as a single item
    t = np.dtype((np.void, A.dtype.itemsize * A.shape[1]))

    # use in1d to find rows in A that are also in B
    return np.in1d(A.view(t), B.view(t))
For example:
gen = np.random.RandomState(0)
A = gen.randn(1000, 28, 28)
dupe_idx = gen.choice(A.shape[0], size=200, replace=False)
B = A[dupe_idx]
A_in_B = find_overlap(A, B)
print(np.all(np.where(A_in_B)[0] == np.sort(dupe_idx)))
# True
This method is much more memory-efficient than Divakar's, since it doesn't require broadcasting out to an (m, n, ...) boolean array. In fact, if A and B are row-major then no copying is required at all.
For comparison I've slightly adapted Divakar and B. M.'s solutions.
def divakar(A, B):
    A.shape = A.shape[0], -1
    B.shape = B.shape[0], -1
    return (B[:,None] == A).all(axis=2).any(0)

def bm(A, B):
    t = 'S' + str(A.size // A.shape[0] * A.dtype.itemsize)
    ma = np.frombuffer(np.ascontiguousarray(A), t)
    mb = np.frombuffer(np.ascontiguousarray(B), t)
    return (mb[:, None] == ma).any(0)
Benchmarks:
In [1]: na = 1000; nb = 200; rowshape = 28, 28
In [2]: %%timeit A = gen.randn(na, *rowshape); idx = gen.choice(na, size=nb, replace=False); B = A[idx]
divakar(A, B)
....:
1 loops, best of 3: 244 ms per loop
In [3]: %%timeit A = gen.randn(na, *rowshape); idx = gen.choice(na, size=nb, replace=False); B = A[idx]
bm(A, B)
....:
100 loops, best of 3: 2.81 ms per loop
In [4]: %%timeit A = gen.randn(na, *rowshape); idx = gen.choice(na, size=nb, replace=False); B = A[idx]
find_overlap(A, B)
....:
100 loops, best of 3: 15 ms per loop
As you can see, B. M.'s solution is slightly faster than mine for small n, but np.in1d scales better than testing equality for all elements (O(n log n) rather than O(n²) complexity).
In [5]: na = 10000; nb = 2000; rowshape = 28, 28
In [6]: %%timeit A = gen.randn(na, *rowshape); idx = gen.choice(na, size=nb, replace=False); B = A[idx]
bm(A, B)
....:
1 loops, best of 3: 271 ms per loop
In [7]: %%timeit A = gen.randn(na, *rowshape); idx = gen.choice(na, size=nb, replace=False); B = A[idx]
find_overlap(A, B)
....:
10 loops, best of 3: 123 ms per loop
Divakar's solution is intractable on my laptop for arrays of this size, since it requires generating a 15GB intermediate array whereas I only have 8GB RAM.
Memory permitting you could use broadcasting, like so -
val_dateset[(train_dataset[:,None] == val_dateset).all(axis=(2,3)).any(0)]
Sample run -
In [55]: train_dataset
Out[55]:
array([[[1, 1],
[1, 1]],
[[1, 0],
[0, 0]],
[[0, 0],
[0, 1]],
[[0, 1],
[0, 0]],
[[1, 1],
[1, 0]]])
In [56]: val_dateset
Out[56]:
array([[[0, 1],
[1, 0]],
[[1, 1],
[1, 1]],
[[0, 0],
[0, 1]]])
In [57]: val_dateset[(train_dataset[:,None] == val_dateset).all(axis=(2,3)).any(0)]
Out[57]:
array([[[1, 1],
[1, 1]],
[[0, 0],
[0, 1]]])
If the elements are integers, you could collapse every block along axes (1,2) of the input arrays into a single scalar, treating each block as a linearly indexable number, and then efficiently use np.in1d or np.intersect1d to find the matches.
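Here is a minimal sketch of that collapsing idea for small non-negative integer blocks, where the linear index fits in an integer dtype (the helper name and the base-vmax encoding are assumptions, not code from the answers above):
import numpy as np

def overlap_rows_int(train, val):
    # Collapse each block into one scalar by treating its entries as digits of a
    # base-vmax number; only valid while vmax**block_size fits in int64
    a = train.reshape(train.shape[0], -1).astype(np.int64)
    b = val.reshape(val.shape[0], -1).astype(np.int64)
    vmax = int(max(a.max(), b.max())) + 1
    w = vmax ** np.arange(a.shape[1], dtype=np.int64)
    return val[np.in1d(b.dot(w), a.dot(w))]
With the 2x2 sample arrays from the run above, overlap_rows_int(train_dataset, val_dateset) returns the same two blocks as the broadcasting one-liner.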
Full broadcasting would generate a 10000*2000*28*28 boolean array here, i.e. roughly 15 GB.
For efficiency, you can:
pack the data, so that the broadcasted comparison is only a 10000*2000 boolean array (about 20 MB):
from pylab import *

N = 10000
a = rand(N,28,28)
b = a[randint(0,N,N//5)]

packedtype = 'S' + str(a.size//a.shape[0]*a.dtype.itemsize)   # 'S6272'
ma = frombuffer(a, packedtype)   # ma.shape == (10000,)
mb = frombuffer(b, packedtype)   # mb.shape == (2000,)

%timeit a[:,None]==b                                    # 102 s
%timeit ma[:,None]==mb                                  # 800 ms
allclose((a[:,None]==b).all((2,3)), ma[:,None]==mb)     # True
Besides using less memory, speed is also helped here by lazy string comparison, which stops at the first differing byte:
In [31]: %timeit a[:100]==b[:100]
10000 loops, best of 3: 175 µs per loop
In [32]: %timeit a[:100]==a[:100]
10000 loops, best of 3: 133 µs per loop
In [34]: %timeit ma[:100]==mb[:100]
100000 loops, best of 3: 7.55 µs per loop
In [35]: %timeit ma[:100]==ma[:100]
10000 loops, best of 3: 156 µs per loop
The matching pairs are then given by (ma[:,None]==mb).nonzero().
use in1d, for (Na+Nb) ln(Na+Nb) complexity, against Na*Nb for the full comparison:
%timeit in1d(ma,mb).nonzero()   # 590 ms
Not a big gain here, but asymptotically better.
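For completeness, the in1d mask can be turned back into rows of the original array, e.g. (an illustrative line, not from the original answer):
common = a[in1d(ma, mb)]   # rows of a that also appear somewhere in b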
Solution
def overlap(a,b):
    """
    Returns a boolean index array for input array b representing
    elements in b that are also found in a.
    """
    aa = a.repeat(b.shape[0],axis=0)
    bb = b.repeat(a.shape[0],axis=0)
    c = aa == bb
    c = c[::a.shape[0]]
    return c.all(axis=1)[:,0]
You can use the returned index array to index b to extract the elements which are also found in a
b[overlap(a,b)]
Explanation
For simplicity's sake I assume you have imported everything from numpy for this example:
from numpy import *
So, for example, given two ndarrays
a = arange(4*2*2).reshape(4,2,2)
b = arange(3*2*2).reshape(3,2,2)
we repeat a and b so that they have the same shape
aa = a.repeat(b.shape[0],axis=0)
bb = b.repeat(a.shape[0],axis=0)
we can then simply compare the elements of aa and bb
c = aa == bb
Then, to get the comparison result for each element of b, we look at every 4th (or, in general, every a.shape[0]th) element of c
c = c[::a.shape[0]]
Finally, we extract a boolean index array marking the sub-arrays whose elements are all True
c.all(axis=1)[:,0]
In our example we get
array([True, True, True], dtype=bool)
To check, change the first element of b
b[0] = array([[50,60],[70,80]])
and we get
array([False, True, True], dtype=bool)
This question comes from Google's online deep learning course.
The following is my solution:
sum = 0 # number of overlapping rows
for i in range(val_dataset.shape[0]):  # iterate over all rows of val_dataset
    overlap = (train_dataset == val_dataset[i,:,:]).all(axis=1).all(axis=1).sum()
    if overlap:
        sum += 1
print(sum)
Broadcasting is used for the comparison instead of a second loop; you may test the performance difference.