Selecting Random Windows from Multidimensional Numpy Array Rows - python

I have a large array where each row is a time series and thus needs to stay in order.
I want to select a random window of a given size for each row.
Example:
>>> import numpy as np
>>> arr = np.array(range(42)).reshape(6, 7)
>>> arr
array([[ 0,  1,  2,  3,  4,  5,  6],
       [ 7,  8,  9, 10, 11, 12, 13],
       [14, 15, 16, 17, 18, 19, 20],
       [21, 22, 23, 24, 25, 26, 27],
       [28, 29, 30, 31, 32, 33, 34],
       [35, 36, 37, 38, 39, 40, 41]])
>>> # What I want to do:
>>> select_random_windows(arr, window_size=3)
array([[ 1,  2,  3],
       [11, 12, 13],
       [14, 15, 16],
       [22, 23, 24],
       [30, 31, 32],
       [38, 39, 40]])
What an ideal solution would look like to me:
def select_random_windows(arr, window_size):
    offsets = np.random.randint(0, arr.shape[0] - window_size, size=arr.shape[1])
    return arr[:, offsets: offsets + window_size]
But unfortunately this does not work.
What I'm going with right now is terribly slow:
def select_random_windows(arr, window_size):
    result = []
    offsets = np.random.randint(0, arr.shape[1] - window_size, size=arr.shape[0])
    for row, offset in enumerate(offsets):
        result.append(arr[row][offset: offset + window_size])
    return np.array(result)
Sure, I could do the same with a list comprehension (and get a minimal speed boost), but I was wondering whether there is some super-smart NumPy vectorized way to do this.

Here's one leveraging np.lib.stride_tricks.as_strided -
def random_windows_per_row_strided(arr, W=3):
    idx = np.random.randint(0, arr.shape[1] - W + 1, arr.shape[0])
    strided = np.lib.stride_tricks.as_strided
    m, n = arr.shape
    s0, s1 = arr.strides
    windows = strided(arr, shape=(m, n - W + 1, W), strides=(s0, s1, s1))
    return windows[np.arange(len(idx)), idx]
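On NumPy 1.20+, the same window view can be built without computing strides by hand, using np.lib.stride_tricks.sliding_window_view (a minimal sketch, equivalent in output to the strided version above):
def random_windows_per_row_swv(arr, W=3):
    # all length-W windows along each row: shape (m, n-W+1, W)
    windows = np.lib.stride_tricks.sliding_window_view(arr, W, axis=1)
    idx = np.random.randint(0, arr.shape[1] - W + 1, arr.shape[0])
    return windows[np.arange(arr.shape[0]), idx]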
Runtime test on a bigger array with 100,000 rows -
In [469]: arr = np.random.rand(100000,100)
# @Psidom's soln
In [470]: %timeit select_random_windows(arr, window_size=3)
100 loops, best of 3: 7.41 ms per loop
In [471]: %timeit random_windows_per_row_strided(arr, W=3)
100 loops, best of 3: 6.84 ms per loop
# @Psidom's soln
In [472]: %timeit select_random_windows(arr, window_size=30)
10 loops, best of 3: 26.8 ms per loop
In [473]: %timeit random_windows_per_row_strided(arr, W=30)
100 loops, best of 3: 9.65 ms per loop
# @Psidom's soln
In [474]: %timeit select_random_windows(arr, window_size=50)
10 loops, best of 3: 41.8 ms per loop
In [475]: %timeit random_windows_per_row_strided(arr, W=50)
100 loops, best of 3: 10 ms per loop

In the return statement, change the slicing to advanced indexing; you also need to fix the sampling code a little bit:
def select_random_windows(arr, window_size):
    offsets = np.random.randint(0, arr.shape[1] - window_size + 1, size=arr.shape[0])
    return arr[np.arange(arr.shape[0])[:,None], offsets[:,None] + np.arange(window_size)]
select_random_windows(arr, 3)
#array([[ 4,  5,  6],
#       [ 7,  8,  9],
#       [17, 18, 19],
#       [25, 26, 27],
#       [31, 32, 33],
#       [39, 40, 41]])
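For clarity, the two index arrays broadcast against each other: np.arange(arr.shape[0])[:,None] has shape (rows, 1), while offsets[:,None] + np.arange(window_size) has shape (rows, window_size), so each row i picks columns offsets[i] .. offsets[i]+window_size-1. A small illustration with hypothetical offsets [1, 4, 0] and window_size=3:
np.array([1, 4, 0])[:,None] + np.arange(3)
#array([[1, 2, 3],
#       [4, 5, 6],
#       [0, 1, 2]])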

Related

Python: create a matrix with differing integers in each individual row-vector [duplicate]

I'm trying to create a 2D array (six columns and lots of rows) with numpy random choice, with unique values between 1 and 50 for every row, not across the whole array:
np.sort(np.random.choice(np.arange(1,50),size=(100,6),replace=False))
But this raises an error.
ValueError: Cannot take a larger sample than population when 'replace=False'
Is it possible to do this with a one-liner, without a loop?
Edit
Okay, I got the answer.
These are the results with the Jupyter %time cell magic:
# @James' solution
np.stack([np.random.choice(np.arange(1,50),size=6,replace=False) for i in range(1_000_000)])
Wall time: 25.1 s
# @Divakar's solution
np.random.rand(1_000_000, 50).argpartition(6,axis=1)[:,:6]+1
Wall time: 1.36 s
# @CoryKramer's solution
np.array([np.random.choice(np.arange(1, 50), size=6, replace=False) for _ in range(1_000_000)])
Wall time: 25.5 s
I changed the dtypes of np.empty and np.random.randint in @Paul Panzer's solution because it was not working on my PC.
3.6.0 |Anaconda custom (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)]
The fastest one is:
def pp(n):
    draw = np.empty((n, 6), dtype=np.int64)
    # generating random numbers is expensive, so draw a large one and
    # make six out of one
    draw[:, 0] = np.random.randint(0, 50*49*48*47*46*45, (n,), dtype=np.uint64)
    draw[:, 1:] = np.arange(50, 45, -1)
    draw = np.floor_divide.accumulate(draw, axis=-1)
    draw[:, :-1] -= draw[:, 1:] * np.arange(50, 45, -1)
    # map the shorter ranges (:49, :48, :47) to the non-occupied
    # positions; this amounts to incrementing for each number on the
    # left that is not larger. the nasty bit: if due to incrementing
    # new numbers on the left are "overtaken" then for them we also
    # need to increment.
    for i in range(1, 6):
        coll = np.sum(draw[:, :i] <= draw[:, i, None], axis=-1)
        collidx = np.flatnonzero(coll)
        if collidx.size == 0:
            continue
        coll = coll[collidx]
        tot = coll
        while True:
            draw[collidx, i] += coll
            coll = np.sum(draw[collidx, :i] <= draw[collidx, i, None], axis=-1)
            relidx = np.flatnonzero(coll > tot)
            if relidx.size == 0:
                break
            coll, tot = coll[relidx]-tot[relidx], coll[relidx]
            collidx = collidx[relidx]
    return draw + 1
# @Paul Panzer's solution
pp(1_000_000)
Wall time: 557 ms
Thank you all.
Here's a vectorized approach with the rand+argsort/argpartition trick: argsorting a row of i.i.d. uniform random numbers yields a uniformly random permutation, so argpartition(6) cheaply picks out the positions of the six smallest values, i.e. a uniform 6-element sample of 1..50 without replacement -
np.random.rand(rows, 50).argpartition(6,axis=1)[:,:6]+1
Sample run -
In [41]: rows = 10
In [42]: np.random.rand(rows, 50).argpartition(6,axis=1)[:,:6]+1
Out[42]:
array([[ 1,  9,  3, 26, 14, 44],
       [32, 20, 27, 13, 25, 45],
       [40, 12, 47, 16, 10, 29],
       [ 6, 36, 32, 16, 18,  4],
       [42, 46, 24,  9,  1, 31],
       [15, 25, 47, 42, 34, 24],
       [ 7, 16, 49, 31, 40, 20],
       [28, 17, 47, 36,  8, 44],
       [ 7, 42, 14,  4, 17, 35],
       [39, 19, 37,  7,  8, 36]])
Just to prove the random-ness -
In [56]: rows = 1000000
In [57]: out = np.random.rand(rows, 50).argpartition(6,axis=1)[:,:6]+1
In [58]: np.bincount(out.ravel())[1:]
Out[58]:
array([120048, 120026, 119942, 119838, 119885, 119669, 119965, 119491,
       120280, 120108, 120293, 119399, 119917, 119974, 120195, 119796,
       119887, 119505, 120235, 119857, 119499, 120560, 119891, 119693,
       120081, 120369, 120011, 119714, 120218, 120581, 120111, 119867,
       119791, 120265, 120457, 120048, 119813, 119702, 120266, 120445,
       120016, 120190, 119576, 119737, 120153, 120215, 120144, 120196,
       120218, 119863])
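The bincount above checks the marginal counts; uniqueness within each row can be verified as well (a small sketch, reusing out from above):
s = np.sort(out, axis=1)
assert (s[:, 1:] != s[:, :-1]).all()   # no row contains a duplicate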
Timings on one million rows of data -
In [43]: rows = 1000000
In [44]: %timeit np.random.rand(rows, 50).argpartition(6,axis=1)[:,:6]+1
1 loop, best of 3: 1.07 s per loop
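On NumPy 1.20+, the Generator API offers another vectorized route: tile the range 1..50 into rows, shuffle each row independently with permuted, and keep the first six columns (a sketch, not part of the original timings):
rng = np.random.default_rng()
out = rng.permuted(np.tile(np.arange(1, 51), (rows, 1)), axis=1)[:, :6]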
This isn't pure numpy, but you could wrap your solution in a list comprehension:
>>> rows = 10
>>> cols = 6
>>> np.array([np.random.choice(np.arange(1, 50), size=cols, replace=False) for _ in range(rows)])
array([[ 9, 10, 21, 33, 34, 15],
       [48, 46, 36,  7, 37, 45],
       [21, 15,  5,  9, 31, 26],
       [48, 24, 30, 18, 47, 23],
       [22, 31, 19, 32,  3, 33],
       [35, 44, 15, 46, 20, 43],
       [11, 37, 44,  6, 16, 35],
       [42, 49, 41, 28, 12, 19],
       [19,  6, 32,  3,  1, 22],
       [29, 33, 42,  5, 30, 43]])
You can create each row by itself and then stack them.
np.stack([np.random.choice(np.arange(1,50),size=6,replace=False) for i in range(100)])
Here is a constructive approach: draw the first number (50 choices), then the second (49 choices), etc. For large sets it's quite competitive (pp in the table below):
# n = 10
# pp            0.18564210 ms
# Divakar       0.01960790 ms
# James         0.20074140 ms
# CK            0.17823420 ms
# n = 1000
# pp            0.80046050 ms
# Divakar       1.31817130 ms
# James        18.93511460 ms
# CK           20.83670820 ms
# n = 1000000
# pp          655.32905590 ms
# Divakar    1352.44713990 ms
# James     18471.08987370 ms
# CK        18369.79808050 ms
# pp checking plausibility...
#    var (exp obs) 208.333333333 208.363840259
#   mean (exp obs) 25.5 25.5064865
# Divakar checking plausibility...
#    var (exp obs) 208.333333333 208.21113972
#   mean (exp obs) 25.5 25.499471
# James checking plausibility...
#    var (exp obs) 208.333333333 208.313436938
#   mean (exp obs) 25.5 25.4979035
# CK checking plausibility...
#    var (exp obs) 208.333333333 208.169585249
#   mean (exp obs) 25.5 25.49
Code including benchmarking. The algorithm is a bit complicated because mapping to free spots is hairy:
import numpy as np
import types
from timeit import timeit

def f_pp(n):
    draw = np.empty((n, 6), dtype=int)
    # generating random numbers is expensive, so draw a large one and
    # make six out of one
    draw[:, 0] = np.random.randint(0, 50*49*48*47*46*45, (n,))
    draw[:, 1:] = np.arange(50, 45, -1)
    draw = np.floor_divide.accumulate(draw, axis=-1)
    draw[:, :-1] -= draw[:, 1:] * np.arange(50, 45, -1)
    # map the shorter ranges (:49, :48, :47) to the non-occupied
    # positions; this amounts to incrementing for each number on the
    # left that is not larger. the nasty bit: if due to incrementing
    # new numbers on the left are "overtaken" then for them we also
    # need to increment.
    for i in range(1, 6):
        coll = np.sum(draw[:, :i] <= draw[:, i, None], axis=-1)
        collidx = np.flatnonzero(coll)
        if collidx.size == 0:
            continue
        coll = coll[collidx]
        tot = coll
        while True:
            draw[collidx, i] += coll
            coll = np.sum(draw[collidx, :i] <= draw[collidx, i, None], axis=-1)
            relidx = np.flatnonzero(coll > tot)
            if relidx.size == 0:
                break
            coll, tot = coll[relidx]-tot[relidx], coll[relidx]
            collidx = collidx[relidx]
    return draw + 1

def check_result(draw, name):
    print(name[2:], ' checking plausibility...')
    import scipy.stats
    assert all(len(set(row)) == 6 for row in draw)
    assert len(set(draw.ravel())) == 50
    print('   var (exp obs)', scipy.stats.uniform(0.5, 50).var(), draw.var())
    print('  mean (exp obs)', scipy.stats.uniform(0.5, 50).mean(), draw.mean())

def f_Divakar(n):
    return np.random.rand(n, 50).argpartition(6, axis=1)[:, :6] + 1

def f_James(n):
    return np.stack([np.random.choice(np.arange(1, 51), size=6, replace=False) for i in range(n)])

def f_CK(n):
    return np.array([np.random.choice(np.arange(1, 51), size=6, replace=False) for _ in range(n)])

for n in (10, 1_000, 1_000_000):
    print(f'n = {n}')
    for name, func in list(globals().items()):
        if not name.startswith('f_') or not isinstance(func, types.FunctionType):
            continue
        try:
            print("{:16s}{:16.8f} ms".format(name[2:], timeit(
                'f(n)', globals={'f': func, 'n': n}, number=10) * 100))
        except:
            print("{:16s} apparently failed".format(name[2:]))
    if n >= 10000:
        for name, func in list(globals().items()):
            if name.startswith('f_') and isinstance(func, types.FunctionType):
                check_result(func(n), name)
np.sort(np.random.choice(np.arange(1,50),size=(100,6),replace=False))
I think you should change replace to True: with replace=False, a single call tries to draw all 100*6 = 600 samples from the 49-number population without replacement, which is what raises the ValueError. Note that replace=True then allows repeated values within a row.

How to faster iterate over a Python numpy.ndarray with 2 dimensions

So, I simply want to make this faster:
for x in range(matrix.shape[0]):
    for y in range(matrix.shape[1]):
        if matrix[x][y] == 2 or matrix[x][y] == 3 or matrix[x][y] == 4 or matrix[x][y] == 5 or matrix[x][y] == 6:
            if x not in heights:
                heights.append(x)
I simply iterate over a 2D matrix (usually around 18x18 or 22x22) and collect each row index x whose row contains certain values. But it's kind of slow; I wonder which is the fastest way to do this.
Thank you very much!
For a numpy based approach, you can do:
np.flatnonzero(((a>=2) & (a<=6)).any(1))
# array([1, 2, 6], dtype=int64)
Where:
a = np.random.randint(0,30,(7,7))
print(a)
array([[25, 27, 28, 21, 18,  7, 26],
       [ 2, 18, 21, 13, 27, 26,  2],
       [23, 27, 18,  7,  4,  6, 13],
       [25, 20, 19, 15,  8, 22,  0],
       [27, 23, 18, 22, 25, 17, 15],
       [19, 12, 12,  9, 29, 23, 21],
       [16, 27, 22, 23,  8,  3, 11]])
Timings on a larger array:
a = np.random.randint(0,30, (1000,1000))
%%timeit
heights = []
for x in range(a.shape[0]):
    for y in range(a.shape[1]):
        if a[x][y] == 2 or a[x][y] == 3 or a[x][y] == 4 or a[x][y] == 5 or a[x][y] == 6:
            if x not in heights:
                heights.append(x)
# 3.17 s ± 59.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
yatu = np.flatnonzero(((a>=2) & (a<=6)).any(1))
# 965 µs ± 11.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
np.allclose(yatu, heights)
# True
Vectorizing with numpy yields roughly a 3200x speedup.
It looks like you want to find if 2, 3, 4, 5 or 6 appear in the matrix.
You can use np.isin() to create a matrix of true/false values, then use that as an indexer:
>>> arr = np.array([1,2,3,4,4,0]).reshape(2,3)
>>> arr[np.isin(arr, [2,3,4,5,6])]
array([2, 3, 4, 4])
Optionally, turn that into a plain Python set() for faster in lookups and no duplicates.
To get the positions in the array where those numbers appear, use argwhere:
>>> np.argwhere(np.isin(arr, [2,3,4,5,6]))
array([[0, 1],
[0, 2],
[1, 0],
[1, 1]])
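The two ideas compose: to get exactly the row indices the original loop collects in heights, reduce the np.isin mask along each row (a small sketch combining both answers):
heights = np.flatnonzero(np.isin(matrix, [2, 3, 4, 5, 6]).any(axis=1))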

Most Efficient Code for clipping the elements of a vector until it reaches a sum

Suppose we have an integer vector that sums to S1. I would like to take this vector and produce another vector that sums to S2 < S1. I'd like to do this by subtracting off the (first) max element, one at a time, until the sum is down to S2.
E.g. clip_to_sum([1,4,8,3], total=10) == [1, 3, 3, 3].
An easy code which does this is:
def clip_to_sum(vec, total):
    new_vec = np.array(vec)
    current_total = np.sum(vec)
    while current_total > total:
        i = np.argmax(new_vec)
        new_vec[i] -= 1
        current_total -= 1
    return new_vec
However, it's obviously horribly inefficient, because we only subtract one from a single element at a time, no matter how far the max element is ahead.
Anyone have a nifty trick for doing this efficiently?
Edit: An input vector that already sums to less than S2 can be left unchanged, so for example clip_to_sum([1,4,8,3], 20) should be [1,4,8,3].
Edit: For those wondering what this is for, it's the mundane task of determining column widths in a fixed-width table!
You are basically going Robin Hood there, clipping off the values that are above the global average w.r.t. total, until the global sum reaches the threshold. Using that theory, we start off with a baseline number and then loop through, like so -
def clip_until_sum(vec, total):
    # Get array version
    a = np.asarray(vec)
    if a.sum() <= total:
        return a
    # Baseline number
    b = int(total / float(len(a)))
    # Setup output
    out = np.where(a > b, b, a)
    s = out.sum()
    # Loop to shift up values starting from baseline
    while s < total:
        idx = np.flatnonzero(a > out)
        dss = total - s
        out[idx[max(0, len(idx) - dss):]] += 1
        s = out.sum()
    return out
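For intuition, a hand-worked pass on [1, 4, 8, 3] with total=10 (matching the first sample run below): the baseline is b = int(10 / 4) = 2, so out starts as [1, 2, 2, 2] with s = 7; the loop then finds idx = [1, 2, 3] (the entries still below their originals) and dss = 3, bumps all three by one, and s reaches 10, giving [1, 3, 3, 3].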
Sample runs -
Set #1 :
In [868]: clip_until_sum([1,4,8,3], 10)
Out[868]: array([1, 3, 3, 3])
In [869]: clip_until_sum([1,4,8,3], 11)
Out[869]: array([1, 3, 4, 3])
In [870]: clip_until_sum([1,4,8,3], 12)
Out[870]: array([1, 4, 4, 3])
In [871]: clip_until_sum([1,4,8,3], 13)
Out[871]: array([1, 4, 5, 3])
In [872]: clip_until_sum([1,4,8,3], 14)
Out[872]: array([1, 4, 6, 3])
In [873]: clip_until_sum([1,4,8,3], 15)
Out[873]: array([1, 4, 7, 3])
In [874]: clip_until_sum([1,4,8,3], 16)
Out[874]: array([1, 4, 8, 3])
Set #2 :
In [875]: clip_until_sum([1,4,8,3,5,6], 12)
Out[875]: array([1, 2, 2, 2, 2, 3])
Runtime test and verification -
In [164]: np.random.seed(0)
# Assuming 10000 elems with max of 1000 and total as half of sum
In [165]: vec = np.random.randint(0, 1000, size=10000)
In [167]: total = vec.sum()//2
In [168]: np.allclose(clip_to_sum(vec, total), clip_until_sum(vec, total))
Out[168]: True
In [169]: %timeit clip_to_sum(vec, total)
1 loop, best of 3: 19.1 s per loop
In [170]: %timeit clip_until_sum(vec, total)
100 loops, best of 3: 2.8 ms per loop
# @Warren Weckesser's soln
In [171]: %timeit limit_sum1(vec, total)
1000 loops, best of 3: 733 µs per loop
Here are two functions that compute the clipped array. The first, limit_sum1, will not give exactly the same result as your function, because it, in effect, makes different choices of which "max" to decrease when the maximum occurs multiple times in the input vector. That is, if vec = [4, 4, 4] and total = 11, there are three possible results: [3, 4, 4], [4, 3, 4], and [4, 4, 3]. Your function gives [3, 4, 4], while limit_sum1 gives [4, 4, 3].
For very small input vectors, like the examples in the question, limit_sum2 is generally faster than limit_sum1, but neither is faster than your clip_to_sum. For somewhat longer input vectors with more varied input range, both are faster than clip_to_sum, and for very long input vectors, limit_sum1 is much faster. Examples with timing are below.
def limit_sum1(vec, total):
    x = np.asarray(vec)
    delta = x.sum() - total
    if delta <= 0:
        return x
    i = np.argsort(x)
    # j is the inverse of the sorting permutation i.
    j = np.empty_like(i)
    j[i] = np.arange(len(x))[::-1]
    y = np.zeros(len(x) + 1, dtype=int)
    y[1:] = x[i]
    d = np.diff(y)[::-1]
    y = y[::-1]
    wd = d * np.arange(1, len(d) + 1)
    cs = wd.cumsum()
    k = np.searchsorted(cs, delta, side='right')
    if k > 0:
        y[:k] -= d[:k][::-1].cumsum()[::-1]
        delta = delta - cs[k-1]
    q, r = divmod(delta, k + 1)
    y[:k+1] -= q
    y[:r] -= 1
    x2 = y[j]
    return x2
def limit_sum2(vec, total):
    a = np.array(vec)
    while a.sum() > total:
        amax = a.max()
        i = np.where(a == amax)[0]
        if len(i) < len(a):
            nextmax = a[a < amax].max()
        else:
            nextmax = 0
        clip_to_nextmax_delta = len(i) * (amax - nextmax)
        diff = a.sum() - total
        if clip_to_nextmax_delta > diff:
            q, r = divmod(diff, len(i))
            a[i] -= q
            a[i[:r]] -= 1
            break
        else:
            # Clip all the current max values to nextmax.
            a[i] = nextmax
    return a
Examples
In [1388]: vec = np.array([1, 4, 8, 3])
limit_sum1, limit_sum2 and clip_to_sum all give the same result:
In [1389]: limit_sum1(vec, total=10)
Out[1389]: array([1, 3, 3, 3])
In [1390]: limit_sum2(vec, total=10)
Out[1390]: array([1, 3, 3, 3])
In [1391]: clip_to_sum(vec, total=10)
Out[1391]: array([1, 3, 3, 3])
clip_to_sum is faster with this small vector.
In [1392]: %timeit limit_sum1(vec, total=10)
33.1 µs ± 272 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [1393]: %timeit limit_sum2(vec, total=10)
24.6 µs ± 138 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [1394]: %timeit clip_to_sum(vec, total=10)
15.6 µs ± 44.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Let's try a longer vector containing bigger values.
In [1405]: np.random.seed(1729)
In [1406]: vec = np.random.randint(0, 100, size=50)
In [1407]: vec
Out[1407]:
array([13, 37, 21, 67, 13, 89, 59, 35, 65, 91, 36, 73, 93, 83, 43, 86, 44,
       19, 51, 76, 12, 26, 43,  0, 42, 53, 30, 65,  3, 65, 37, 68, 64, 87,
       91,  4, 70, 10, 50, 40, 34, 32, 13,  7, 93, 79, 16, 98,  1, 35])
In [1408]: vec.sum()
Out[1408]: 2362
Find a result using each function:
In [1409]: limit_sum1(vec, total=1500)
Out[1409]:
array([13, 37, 21, 38, 13, 38, 38, 35, 38, 38, 36, 38, 38, 38, 38, 38, 38,
       19, 38, 38, 12, 26, 38,  0, 39, 38, 30, 38,  3, 38, 37, 38, 38, 38,
       38,  4, 38, 10, 38, 39, 34, 32, 13,  7, 38, 38, 16, 38,  1, 35])
In [1410]: limit_sum2(vec, total=1500)
Out[1410]:
array([13, 37, 21, 38, 13, 38, 38, 35, 38, 38, 36, 38, 38, 38, 38, 38, 38,
       19, 38, 38, 12, 26, 38,  0, 38, 38, 30, 38,  3, 38, 37, 38, 38, 38,
       38,  4, 38, 10, 38, 38, 34, 32, 13,  7, 38, 39, 16, 39,  1, 35])
In [1411]: clip_to_sum(vec, total=1500)
Out[1411]:
array([13, 37, 21, 38, 13, 38, 38, 35, 38, 38, 36, 38, 38, 38, 38, 38, 38,
       19, 38, 38, 12, 26, 38,  0, 38, 38, 30, 38,  3, 38, 37, 38, 38, 38,
       38,  4, 38, 10, 38, 38, 34, 32, 13,  7, 38, 39, 16, 39,  1, 35])
This time, limit_sum1 is the fastest by a wide margin:
In [1413]: %timeit limit_sum1(vec, total=1500)
34.9 µs ± 257 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [1414]: %timeit limit_sum2(vec, total=1500)
272 µs ± 2.12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [1415]: %timeit clip_to_sum(vec, total=1500)
1.74 ms ± 7.08 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
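A quick way to sanity-check any of these implementations is a property test: the output should hit the target sum exactly, never increase an element, and leave an under-target input untouched (a hedged sketch, reusing limit_sum1 from above):
vec = np.random.randint(0, 100, size=50)
total = 1500
out = limit_sum1(vec, total)
assert out.sum() == total                             # target reached exactly
assert (out <= vec).all()                             # elements only ever decrease
assert (limit_sum1(vec, vec.sum() + 1) == vec).all()  # under-target input unchanged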
You can modify your function to include a difference between the max and second max elements. This will use additional compute resources per loop but should reduce the total number of loops significantly.
I've tested this versus your original function and it gives the same results. Though, admittedly, I am having difficulty seeing any real speed up between the two.
def clip_to_sum(vec, total):
    current_total = np.sum(vec)
    new_vec = np.array(vec)
    while current_total > total:
        i = np.argmax(new_vec)
        d = np.partition(new_vec.flatten(), -2)[-2]
        diff = new_vec[i] - d
        if not (new_vec[i] == diff) and diff > 0:
            new_vec[i] -= diff
            current_total -= diff
        else:
            new_vec[i] -= 1
            current_total -= 1
    return new_vec
Unfortunately it's not really obvious what type of result you are interested in. But let's assume you have an array of a certain length and you want to take out the first elements A[0:ix] so that the sum is (somehow close to) S1. You can do:
S1 = 5
A = np.array([1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,])
B = np.cumsum(A)
ix = np.argmax(B>=S1)+1
C = A[0:ix]
print("C = ", C); print("sum C = ", np.sum(C))
which gives
C = [1 1 1 1 1]
sum C = 5
You can write the same in one line:
C = A[0:np.argmax(np.cumsum(A)>=S1)+1]
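The trick here is that np.argmax on a boolean array returns the index of the first True, since True is the maximum value and argmax returns the first occurrence (a small illustration):
np.argmax(np.array([0, 3, 7, 12]) >= 5)
# 2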

Is there any way to find top-left and right-bottom pixels of cropped image from a full image using Python? [duplicate]

I have a large NumPy.array field_array and a smaller array match_array, both consisting of int values. Using the following example, how can I check if any match_array-shaped segment of field_array contains values that exactly correspond to the ones in match_array?
import numpy
raw_field = ( 24,  25,  26,  27,  28,  29,  30,  31,  23, \
              33,  34,  35,  36,  37,  38,  39,  40,  32, \
             -39, -38, -37, -36, -35, -34, -33, -32, -40, \
             -30, -29, -28, -27, -26, -25, -24, -23, -31, \
             -21, -20, -19, -18, -17, -16, -15, -14, -22, \
             -12, -11, -10,  -9,  -8,  -7,  -6,  -5, -13, \
              -3,  -2,  -1,   0,   1,   2,   3,   4,  -4, \
               6,   7,   8,   4,   5,   6,   7,  13,   5, \
              15,  16,  17,   8,   9,  10,  11,  22,  14)
field_array = numpy.array(raw_field, int).reshape(9,9)
match_array = numpy.arange(12).reshape(3,4)
This example ought to return True, since the pattern described by match_array aligns over [6:9, 3:7].
Approach #1
This approach derives from a solution to Implement Matlab's im2col 'sliding' in python that was designed to rearrange sliding blocks from a 2D array into columns. Thus, to solve our case here, those sliding blocks from field_array could be stacked as columns and compared against a column-vector version of match_array.
Here's a formal definition of the function for the rearrangement/stacking -
def im2col(A, BLKSZ):
    # Parameters
    M, N = A.shape
    col_extent = N - BLKSZ[1] + 1
    row_extent = M - BLKSZ[0] + 1
    # Get starting block indices
    start_idx = np.arange(BLKSZ[0])[:,None]*N + np.arange(BLKSZ[1])
    # Get offset indices across the height and width of input array
    offset_idx = np.arange(row_extent)[:,None]*N + np.arange(col_extent)
    # Get all actual indices & index into input array for final output
    return np.take(A, start_idx.ravel()[:,None] + offset_idx.ravel())
To solve our case, here's the implementation based on im2col -
# Get sliding blocks of shape same as match_array from field_array into columns
# Then, compare them with a column vector version of match array.
col_match = im2col(field_array,match_array.shape) == match_array.ravel()[:,None]
# Shape of output array that has field_array compared against a sliding match_array
out_shape = np.asarray(field_array.shape) - np.asarray(match_array.shape) + 1
# Now, see if all elements in a column are ONES and reshape to out_shape.
# Finally, find the position of TRUE indices
R,C = np.where(col_match.all(0).reshape(out_shape))
The output for the given sample in the question would be -
In [151]: R,C
Out[151]: (array([6]), array([3]))
Approach #2
Given that OpenCV already has a template-matching function that computes squared differences, you can employ that and look for zero differences, which would be your matching positions. So, if you have access to cv2 (the OpenCV module), the implementation would look something like this -
import cv2
from cv2 import matchTemplate as cv2m
M = cv2m(field_array.astype('uint8'),match_array.astype('uint8'),cv2.TM_SQDIFF)
R,C = np.where(M==0)
giving us -
In [204]: R,C
Out[204]: (array([6]), array([3]))
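One caveat with the uint8 cast: values wrap mod 256, so arrays whose entries differ by a multiple of 256 would collide and could report a false match. For int arrays with a wider range, casting to float32 (also accepted by matchTemplate) avoids this (a hedged sketch):
M = cv2m(field_array.astype('float32'), match_array.astype('float32'), cv2.TM_SQDIFF)
R,C = np.where(M==0)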
Benchmarking
This section compares runtimes for all the approaches suggested to solve the question. The credit for the various methods listed in this section goes to their contributors.
Method definitions -
def seek_array(search_in, search_for, return_coords=False):
    si_x, si_y = search_in.shape
    sf_x, sf_y = search_for.shape
    for y in xrange(si_y - sf_y + 1):
        for x in xrange(si_x - sf_x + 1):
            if numpy.array_equal(search_for, search_in[x:x+sf_x, y:y+sf_y]):
                return (x, y) if return_coords else True
    return None if return_coords else False

def skimage_based(field_array, match_array):
    windows = view_as_windows(field_array, match_array.shape)
    return (windows == match_array).all(axis=(2,3)).nonzero()

def im2col_based(field_array, match_array):
    col_match = im2col(field_array, match_array.shape) == match_array.ravel()[:,None]
    out_shape = np.asarray(field_array.shape) - np.asarray(match_array.shape) + 1
    return np.where(col_match.all(0).reshape(out_shape))

def cv2_based(field_array, match_array):
    M = cv2m(field_array.astype('uint8'), match_array.astype('uint8'), cv2.TM_SQDIFF)
    return np.where(M == 0)
Runtime tests -
Case #1 (Sample data from question):
In [11]: field_array
Out[11]:
array([[ 24,  25,  26,  27,  28,  29,  30,  31,  23],
       [ 33,  34,  35,  36,  37,  38,  39,  40,  32],
       [-39, -38, -37, -36, -35, -34, -33, -32, -40],
       [-30, -29, -28, -27, -26, -25, -24, -23, -31],
       [-21, -20, -19, -18, -17, -16, -15, -14, -22],
       [-12, -11, -10,  -9,  -8,  -7,  -6,  -5, -13],
       [ -3,  -2,  -1,   0,   1,   2,   3,   4,  -4],
       [  6,   7,   8,   4,   5,   6,   7,  13,   5],
       [ 15,  16,  17,   8,   9,  10,  11,  22,  14]])
In [12]: match_array
Out[12]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
In [13]: %timeit seek_array(field_array, match_array, return_coords = False)
1000 loops, best of 3: 465 µs per loop
In [14]: %timeit skimage_based(field_array,match_array)
10000 loops, best of 3: 97.9 µs per loop
In [15]: %timeit im2col_based(field_array,match_array)
10000 loops, best of 3: 74.3 µs per loop
In [16]: %timeit cv2_based(field_array,match_array)
10000 loops, best of 3: 30 µs per loop
Case #2 (Bigger random data):
In [17]: field_array = np.random.randint(0,4,(256,256))
In [18]: match_array = field_array[100:116,100:116].copy()
In [19]: %timeit seek_array(field_array, match_array, return_coords = False)
1 loops, best of 3: 400 ms per loop
In [20]: %timeit skimage_based(field_array,match_array)
10 loops, best of 3: 54.3 ms per loop
In [21]: %timeit im2col_based(field_array,match_array)
10 loops, best of 3: 125 ms per loop
In [22]: %timeit cv2_based(field_array,match_array)
100 loops, best of 3: 4.08 ms per loop
There's no such search function built into NumPy, but it is certainly possible to do in NumPy.
As long as your arrays are not too massive*, you could use a rolling window approach:
from skimage.util import view_as_windows
windows = view_as_windows(field_array, match_array.shape)
The function view_as_windows is written purely in NumPy so if you don't have skimage you can always copy the code from here.
Then to see if the sub-array appears in the larger array, you can write:
>>> (windows == match_array).all(axis=(2,3)).any()
True
To find the indices of where the top-left corner of the sub-array matches, you can write:
>>> (windows == match_array).all(axis=(2,3)).nonzero()
(array([6]), array([3]))
This approach should also work for arrays of higher dimensions.
*although the array windows takes up no additional memory (only the strides and shape are changed to create a new view of the data), writing windows == match_array creates a boolean array of size (7, 6, 3, 4) which is 504 bytes of memory. If you're working with very large arrays, this approach might not be feasible.
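If skimage isn't available, NumPy 1.20+ ships an equivalent window view directly (a sketch using np.lib.stride_tricks.sliding_window_view, which behaves like view_as_windows for 2D inputs):
windows = np.lib.stride_tricks.sliding_window_view(field_array, match_array.shape)
(windows == match_array).all(axis=(2,3)).nonzero()
# (array([6]), array([3]))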
One solution is to search the entire search_in array block-at-a-time (a 'block' being a search_for-shaped slice) until either a matching segment is found or the search_in array is exhausted. I can use it to get coordinates for the matching block, or just a bool result by sending True or False for the return_coords optional argument...
def seek_array(search_in, search_for, return_coords=False):
    """Searches for a contiguous instance of a 2d array `search_for` within a larger `search_in` 2d array.
    If the optional argument return_coords is True, the xy coordinates of the zeroeth value of the first
    matching segment of search_in will be returned, or None if there is no matching segment.
    If return_coords is False, a boolean will be returned.
    * Both arrays must be sent as two-dimensional!"""
    si_x, si_y = search_in.shape
    sf_x, sf_y = search_for.shape
    for y in xrange(si_y - sf_y + 1):
        for x in xrange(si_x - sf_x + 1):
            if numpy.array_equal(search_for, search_in[x:x+sf_x, y:y+sf_y]):
                return (x, y) if return_coords else True  # don't forget that coordinates are transposed when viewing NumPy arrays!
    return None if return_coords else False
I wonder if NumPy doesn't already have a function that can do the same thing, though...
To add to the answers already posted, I'd like to add one that takes floating-point precision errors into account, in case the matrices come from, say, image processing, where numbers are subject to floating-point operations.
You can iterate over the indexes of the larger matrix, searching for the smaller matrix. Then you can extract a submatrix of the larger matrix matching the size of the smaller matrix.
You have a match if the contents of both, the submatrix of 'large' and the 'small' matrix match.
The following example shows how to return the first indexes of the location in the large matrix found to match. It would be trivial to extend this function to return an array of locations found to match if that's the intent.
import numpy as np

def find_submatrix(a, b):
    """ Searches the first instance at which 'b' is a submatrix of 'a', iterates
    rows first. Returns the indexes of a at which 'b' was found, or None if
    'b' is not contained within 'a'"""
    a_rows = a.shape[0]
    a_cols = a.shape[1]
    b_rows = b.shape[0]
    b_cols = b.shape[1]
    row_diff = a_rows - b_rows
    col_diff = a_cols - b_cols
    # note the + 1: the original post stopped one placement short of the
    # last valid row/column offset
    for idx_row in np.arange(row_diff + 1):
        for idx_col in np.arange(col_diff + 1):
            row_indexes = [idx + idx_row for idx in np.arange(b_rows)]
            col_indexes = [idx + idx_col for idx in np.arange(b_cols)]
            submatrix_indexes = np.ix_(row_indexes, col_indexes)
            a_submatrix = a[submatrix_indexes]
            # allclose is used for floating point numbers: if they are close
            # while comparing, they are considered equal. Useful if your
            # matrices come from operations that produce floating point
            # numbers. You might want to fine-tune the parameters to allclose().
            are_equal = np.allclose(a_submatrix, b)
            if are_equal:
                return [idx_col, idx_row]
    return None
Using the function above you can run the following example:
large_mtx = np.array([[ 1,  2,  3,  7,  4,  2,  6],
                      [ 4,  5,  6,  2,  1,  3, 11],
                      [10,  4,  2,  1,  3,  7,  6],
                      [ 4,  2,  1,  3,  7,  6, -3],
                      [ 5,  6,  2,  1,  3, 11, -1],
                      [ 0,  0, -1,  5,  4, -1,  2],
                      [10,  4,  2,  1,  3,  7,  6],
                      [10,  4,  2,  1,  3,  7,  6]])

# Example 1: An intersection at column 2 and row 1 of large_mtx
small_mtx_1 = np.array([[4, 2], [2, 1]])
intersect = find_submatrix(large_mtx, small_mtx_1)
print("Example 1, intersection (col,row): " + str(intersect))

# Example 2: No intersection
small_mtx_2 = np.array([[-14, 2], [2, 1]])
intersect = find_submatrix(large_mtx, small_mtx_2)
print("Example 2, intersection (col,row): " + str(intersect))
Which would print:
Example 1, intersection (col,row): [1, 2]
Example 2, intersection (col,row): None
Here's a solution using the as_strided() function from the stride_tricks module:
import numpy as np
from numpy.lib.stride_tricks import as_strided
# field_array (I modified it to have two matching arrays)
A = np.array([[ 24,  25,  26,  27,  28,  29,  30,  31,  23],
              [ 33,   0,   1,   2,   3,  38,  39,  40,  32],
              [-39,   4,   5,   6,   7, -34, -33, -32, -40],
              [-30,   8,   9,  10,  11, -25, -24, -23, -31],
              [-21, -20, -19, -18, -17, -16, -15, -14, -22],
              [-12, -11, -10,  -9,  -8,  -7,  -6,  -5, -13],
              [ -3,  -2,  -1,   0,   1,   2,   3,   4,  -4],
              [  6,   7,   8,   4,   5,   6,   7,  13,   5],
              [ 15,  16,  17,   8,   9,  10,  11,  22,  14]])
# match_array
B = np.arange(12).reshape(3,4)
# Window view of A
A_w = as_strided(A, shape=(A.shape[0] - B.shape[0] + 1,
                           A.shape[1] - B.shape[1] + 1,
                           B.shape[0], B.shape[1]),
                 strides=2*A.strides).reshape(-1, B.shape[0], B.shape[1])
match = (A_w == B).all(axis=(1,2))
We can also find the indices of the first element of each matching block in A. A window at grid position (r, c) sits at flat index r*(A.shape[1] - B.shape[1] + 1) + c in match, while its top-left element sits at flat index r*A.shape[1] + c in A; the (B.shape[1] - 1) correction term below converts between the two:
where = np.where(match)[0]
ind_flat = where + (B.shape[1] - 1)*(np.floor(where/(A.shape[1] - B.shape[1] + 1)).astype(int))
ind = [tuple(row) for row in np.array(np.unravel_index(ind_flat, A.shape)).T]
Result
print(match.any())
True
print(ind)
[(1, 1), (6, 3)]

