Create specific array using numpy - python

I wanted to create this kind of array using numpy:
[[[0,0,0], [1,0,0], ..., [1919,0,0]],
[[0,1,0], [1,1,0], ..., [1919,1,0]],
...,
[[0,1019,0], [1,1019,0], ..., [1919,1019,0]]]
I want to be able to access it like this:
>>> data[25][37]
array([25, 37, 0])
I've tried to create an array this way, but it's not complete:
>>> data = np.mgrid[0:1920:1, 0:1080:1].swapaxes(0,2).swapaxes(0,1)
>>> data[25][37]
array([25, 37])
Do you have any idea how to solve this using numpy?

Approach #1 : Here's one way with np.ogrid and array-initialization -
def indices_zero_grid(m,n):
    I,J = np.ogrid[:m,:n]
    out = np.zeros((m,n,3), dtype=int)
    out[...,0] = I
    out[...,1] = J
    return out
Sample run -
In [550]: out = indices_zero_grid(1920,1080)
In [551]: out[25,37]
Out[551]: array([25, 37, 0])
Approach #2 : A modification of @senderle's cartesian_product and also inspired by @unutbu's modification to it -
import functools
def indices_zero_grid_v2(m,n):
    """
    Based on cartesian_product
    http://stackoverflow.com/a/11146645 (@senderle)
    Inspired by : https://stackoverflow.com/a/46135435 (@unutbu)
    """
    shape = m,n
    arrays = [np.arange(s, dtype='int') for s in shape]
    broadcastable = np.ix_(*arrays)
    broadcasted = np.broadcast_arrays(*broadcastable)
    rows, cols = functools.reduce(np.multiply, broadcasted[0].shape), \
                 len(broadcasted)+1
    out = np.zeros(rows * cols, dtype=int)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(-1,m,n).transpose(1,2,0)
Runtime test -
In [2]: %timeit indices_zero_grid(1920,1080)
100 loops, best of 3: 8.4 ms per loop
In [3]: %timeit indices_zero_grid_v2(1920,1080)
100 loops, best of 3: 8.14 ms per loop

In [50]: data = np.mgrid[:1920, :1080, :1].transpose(1,2,3,0)[..., 0, :]
In [51]: data[25][37]
Out[51]: array([25, 37, 0])
Note that data[25][37] requires two calls to __getitem__. With NumPy arrays, you can access the same value more efficiently (with one call to __getitem__) using data[25, 37]:
In [54]: data[25, 37]
Out[54]: array([25, 37, 0])
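For completeness, here is one more way to build the same array (a sketch of mine, not from the answers above): generate the two coordinate planes with np.indices and stack a zero plane behind them.
i, j = np.indices((1920, 1080))       # i[r, c] == r, j[r, c] == c
data = np.dstack((i, j, np.zeros_like(i)))
data[25, 37]                          # array([25, 37, 0])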

Related

Recursive function to apply any function to N arrays of any length to form one jagged multidimensional array of N dimensions

Given N input arrays, all of any length, I would like to be able to apply a function to every combination of elements drawn from the arrays.
For example:
Given input arrays:
[1, 2]
[3, 4, 5]
[6, 7, 8, 9]
And a function which returns the product of N elements
I would like to be able to apply a function to every combination of these elements. In this case it results in a 3 dimensional array, of lengths 2, 3, and 4 respectively.
The resulting array would look like this:
[
  [
    [18, 21, 24, 27],
    [24, 28, 32, 36],
    [30, 35, 40, 45]
  ],
  [
    [36, 42, 48, 54],
    [48, 56, 64, 72],
    [60, 70, 80, 90]
  ]
]
An alternative approach uses np.frompyfunc to create a ufunc from the required function. This is then applied with the ufunc's .outer method n-1 times for the n arguments.
import numpy as np

def testfunc(a, b):
    return a*(a+b) + b*b

def apply_func(func, *args, dtype=float):
    """ Apply func sequentially to the args
    """
    u_func = np.frompyfunc(func, 2, 1)      # Create a ufunc from func
    result = np.array(args[0])
    for vec in args[1:]:
        result = u_func.outer(result, vec)  # apply the outer method of the ufunc
        # This returns arrays of object dtype.
    return np.array(result, dtype=dtype)    # Convert to the requested dtype and return the result
apply_func(lambda x,y: x*y, [1,2], [3,4,5],[6,7,8,9] )
# array([[[18., 21., 24., 27.],
# [24., 28., 32., 36.],
# [30., 35., 40., 45.]],
# [[36., 42., 48., 54.],
# [48., 56., 64., 72.],
# [60., 70., 80., 90.]]])
apply_func( testfunc, [1,2], [3,4,5],[6,7,8,9])
# array([[[ 283., 309., 337., 367.],
# [ 603., 637., 673., 711.],
# [1183., 1227., 1273., 1321.]],
# [[ 511., 543., 577., 613.],
# [ 988., 1029., 1072., 1117.],
# [1791., 1843., 1897., 1953.]]])
Suppose we are given N arrays with sizes n1, n2, ..., nN.
Then we can split this problem into (N-1) computations, each over two arrays.
In the first computation, compute the product of n1 and n2; call the output result1.
In the second computation, compute the product of result1 and n3; call the output result2.
...
In the last computation, compute the product of result(N-2) and nN; call the output result(N-1).
The size of result1 is n2 × n1, the size of result2 is n3 × n2 × n1, and so on.
As you can infer, the size of result(N-1) is n(N) × n(N-1) × ... × n2 × n1.
Now suppose we are given two arrays: result(k-1) and arr(k).
We need the product of every element of result(k-1) with every element of arr(k).
Because result(k-1) has size n(k-1) × n(k-2) × ... × n1 and arr(k) has size n(k),
the output array result(k) should have size n(k) × n(k-1) × ... × n1.
That means the solution for this step is the dot product of the transposed arr(k) and result(k-1).
So, the function should be like below.
productOfTwoArrays = lambda arr1, arr2: np.dot(arr2.T, arr1)
So now the per-step problem is solved.
What is left is just applying this to all N arrays, so the solution is iterative.
Let the input list contain N arrays.
def productOfNArrays(Narray: list) -> list:
    result = Narray[0]
    N = len(Narray)
    for idx in range(1, N):
        result = productOfTwoArrays(result, Narray[idx])
    return result
The whole code might look like this:
def productOfNArrays(Narray: list) -> list:
    import numpy as np
    productOfTwoArrays = lambda arr1, arr2: np.dot(arr2.T, arr1)
    result = Narray[0]
    N = len(Narray)
    for idx in range(1, N):
        result = productOfTwoArrays(result, Narray[idx])
    return result
You can do this with broadcasting:
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5])
c = a[None, ...] * b[..., None]
print(c)
Output:
[[ 4 8 12]
[ 5 10 15]]
This can be easily generalized by crafting the appropriate slicing to be passed to the operands.
EDIT
An implementation of such generalization could be:
import numpy as np

def apply_multi_broadcast_1d(func, dim1_arrs):
    n = len(dim1_arrs)
    iter_dim1_arrs = iter(dim1_arrs)
    slicing = tuple(
        slice(None) if j == 0 else None
        for j in range(n))
    result = next(iter_dim1_arrs)[slicing]
    for i, dim1_arr in enumerate(iter_dim1_arrs, 1):
        slicing = tuple(
            slice(None) if j == i else None
            for j in range(n))
        result = func(result, dim1_arr[slicing])
    return result
dim1_arrs = [np.arange(1, n + 1) for n in range(2, 5)]
print(dim1_arrs)
# [array([1, 2]), array([1, 2, 3]), array([1, 2, 3, 4])]
arr = apply_multi_broadcast_1d(lambda x, y: x * y, dim1_arrs)
print(arr.shape)
# (2, 3, 4)
print(arr)
# [[[ 1 2 3 4]
# [ 2 4 6 8]
# [ 3 6 9 12]]
# [[ 2 4 6 8]
# [ 4 8 12 16]
# [ 6 12 18 24]]]
There is no need for recursion here, and I am not sure how it could be beneficial.
Another approach is to generate a np.ufunc from a Python function (as proposed in @TlsChris's answer) and use its np.ufunc.outer() method:
import numpy as np

def apply_multi_outer(func, dim1_arrs):
    ufunc = np.frompyfunc(func, 2, 1)
    iter_dim1_arrs = iter(dim1_arrs)
    result = next(iter_dim1_arrs)
    for dim1_arr in iter_dim1_arrs:
        result = ufunc.outer(result, dim1_arr)
    return result
While this would give identical results (for 1D arrays), this is slower (from slightly to considerably depending on the input sizes) than the broadcasting approach.
Also, while apply_multi_broadcast_1d() is limited to 1-dim inputs, apply_multi_outer() would work for input arrays of higher dimensionality too. The broadcasting approach can be easily adapted to higher dimensionality inputs, as shown below.
EDIT 2
A generalization of apply_multi_broadcast_1d() to N-dim inputs, including a separation of the broadcasting from the function application, follows:
import numpy as np

def multi_broadcast(arrs):
    for i, arr in enumerate(arrs):
        yield arr[tuple(
            slice(None) if j == i else None
            for j, arr in enumerate(arrs) for d in arr.shape)]

def apply_multi_broadcast(func, arrs):
    gen_arrs = multi_broadcast(arrs)
    result = next(gen_arrs)
    for i, arr in enumerate(gen_arrs, 1):
        result = func(result, arr)
    return result
The benchmarks for the three show that apply_multi_broadcast() is marginally slower than apply_multi_broadcast_1d() but faster than apply_multi_outer():
def f(x, y):
    return x * y
dim1_arrs = [np.arange(1, n + 1) for n in range(2, 5)]
print(np.all(apply_multi_outer(f, dim1_arrs) == apply_multi_broadcast_1d(f, dim1_arrs)))
print(np.all(apply_multi_outer(f, dim1_arrs) == apply_multi_broadcast(f, dim1_arrs)))
# True
# True
%timeit apply_multi_broadcast_1d(f, dim1_arrs)
# 100000 loops, best of 3: 7.76 µs per loop
%timeit apply_multi_outer(f, dim1_arrs)
# 100000 loops, best of 3: 9.46 µs per loop
%timeit apply_multi_broadcast(f, dim1_arrs)
# 100000 loops, best of 3: 8.63 µs per loop
dim1_arrs = [np.arange(1, n + 1) for n in range(10, 16)]
print(np.all(apply_multi_outer(f, dim1_arrs) == apply_multi_broadcast_1d(f, dim1_arrs)))
print(np.all(apply_multi_outer(f, dim1_arrs) == apply_multi_broadcast(f, dim1_arrs)))
# True
# True
%timeit apply_multi_broadcast_1d(f, dim1_arrs)
# 100 loops, best of 3: 10 ms per loop
%timeit apply_multi_outer(f, dim1_arrs)
# 1 loop, best of 3: 538 ms per loop
%timeit apply_multi_broadcast(f, dim1_arrs)
# 100 loops, best of 3: 10.1 ms per loop
In my experience, in most cases we are not looking for a truly general solution. Of course, such a general solution seems elegant and desirable, as it will be inherently able to adapt should our requirements change -- as they do quite often when writing research code.
However, instead we are usually actually looking for a solution that is easy to understand and easy to modify, should our requirements change.
One such solution is to use np.einsum():
import numpy as np
a = np.array([1, 2])
b = np.array([3, 4, 5])
c = np.array([6, 7, 8, 9])
np.einsum('a,b,c->abc', a, b, c)
# array([[[18, 21, 24, 27],
# [24, 28, 32, 36],
# [30, 35, 40, 45]],
#
# [[36, 42, 48, 54],
# [48, 56, 64, 72],
# [60, 70, 80, 90]]])
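If the number of input arrays is not fixed, the subscript string for np.einsum() can be generated programmatically. A small sketch of mine (the helper name einsum_outer is made up, not from the original answer):
import string
import numpy as np

def einsum_outer(*arrs):
    # build 'a,b,c->abc' for any number (up to 26) of 1-D arrays
    letters = string.ascii_lowercase[:len(arrs)]
    subscripts = ','.join(letters) + '->' + letters
    return np.einsum(subscripts, *arrs)

einsum_outer(np.array([1, 2]), np.array([3, 4, 5]), np.array([6, 7, 8, 9]))
# same 2x3x4 result as above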

numpy overlap ratio of rectangles

I have an array containing arrays of coordinates like this:
a = [[0,0,300,400],[1,1,15,59],[5,5,300,400]]
Now I want to get the overlap ratio of each rectangle to the other rectangles:
def bool_rect_intersect(A, B):
    return not (B[0]>A[2] or B[2]<A[0] or B[3]<A[1] or B[1]>A[3])

def get_overlap_ratio(A, B):
    in_ = bool_rect_intersect(A, B)
    if not in_:
        return 0
    else:
        left = max(A[0], B[0])
        top = max(A[1], B[1])
        right = min(A[2], B[2])
        bottom = min(A[3], B[3])
        intersection = [left, top, right, bottom]
        surface_intersection = (intersection[2]-intersection[0])*(intersection[3]-intersection[1])
        surface_A = (A[2]- A[0])*(A[3]-A[1]) + 0.0
        return surface_intersection / surface_A
Now I'm looking for the fastest way to compute the grid of overlaps for arrays of size 2000+.
If I loop over it, it takes more than a minute. I tried np.vectorize, but I don't think it is applicable to a multidimensional array.
Approach #1 : Here's a vectorized approach -
def pairwise_overlaps(a):
    r,c = np.triu_indices(a.shape[0],1)
    lt = np.maximum(a[r,:2], a[c,:2])
    tb = np.minimum(a[r,2:], a[c,2:])
    si_vectorized = (tb[:,0] - lt[:,0]) * (tb[:,1] - lt[:,1])
    slicedA_comps = ((a[:,2]- a[:,0])*(a[:,3]-a[:,1]) + 0.0)
    sA_vectorized = np.take(slicedA_comps, r)
    return si_vectorized/sA_vectorized
Sample run -
In [48]: a
Out[48]:
array([[ 0, 0, 300, 400],
[ 1, 1, 15, 59],
[ 5, 5, 300, 400]])
In [49]: print get_overlap_ratio(a[0], a[1]) # Looping thru pairs
...: print get_overlap_ratio(a[0], a[2])
...: print get_overlap_ratio(a[1], a[2])
...:
0.00676666666667
0.971041666667
0.665024630542
In [50]: pairwise_overlaps(a) # Proposed app to get all those in one-go
Out[50]: array([ 0.00676667, 0.97104167, 0.66502463])
Approach #2 : Upon close inspection, we will see that in the previous approach the indexing with the r's and c's would be a performance killer, as it makes copies. We can improve on this by performing computations for each element in a column against every other element in the same column, as listed in the implementation below -
def pairwise_overlaps_v2(a):
    rl = np.minimum(a[:,2], a[:,None,2]) - np.maximum(a[:,0], a[:,None,0])
    bt = np.minimum(a[:,3], a[:,None,3]) - np.maximum(a[:,1], a[:,None,1])
    si_vectorized2D = rl*bt
    slicedA_comps = ((a[:,2]- a[:,0])*(a[:,3]-a[:,1]) + 0.0)
    overlaps2D = si_vectorized2D/slicedA_comps[:,None]
    r = np.arange(a.shape[0])
    tril_mask = r[:,None] < r
    return overlaps2D[tril_mask]
Runtime test
In [238]: n = 1000
In [239]: a = np.hstack((np.random.randint(0,100,(n,2)), \
np.random.randint(300,500,(n,2))))
In [240]: np.allclose(pairwise_overlaps(a), pairwise_overlaps_v2(a))
Out[240]: True
In [241]: %timeit pairwise_overlaps(a)
10 loops, best of 3: 35.2 ms per loop
In [242]: %timeit pairwise_overlaps_v2(a)
100 loops, best of 3: 16 ms per loop
Let's add in the original approach as loop-comprehension -
In [244]: r,c = np.triu_indices(a.shape[0],1)
In [245]: %timeit [get_overlap_ratio(a[r[i]], a[c[i]]) for i in range(len(r))]
1 loops, best of 3: 2.85 s per loop
Around 180x speedup there with the second approach over the original one!
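If the full n-by-n grid of ratios is wanted rather than the condensed upper-triangle vector, the intermediates of Approach #2 can be returned directly. A sketch of mine (not from the original answer) that also clamps negative widths/heights to zero, so that non-overlapping pairs get a ratio of 0, matching get_overlap_ratio:
def overlap_grid(a):
    rl = np.minimum(a[:,2], a[:,None,2]) - np.maximum(a[:,0], a[:,None,0])
    bt = np.minimum(a[:,3], a[:,None,3]) - np.maximum(a[:,1], a[:,None,1])
    inter = np.maximum(rl, 0) * np.maximum(bt, 0)    # 0 for disjoint pairs
    areas = (a[:,2] - a[:,0]) * (a[:,3] - a[:,1]) + 0.0
    return inter / areas[:,None]                     # grid[i,j]: intersection relative to area of rectangle i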

Dot product of two sparse matrices affecting zero values only

I'm trying to compute a simple dot product but leave nonzero values from the original matrix unchanged. A toy example:
import numpy as np
A = np.array([[2, 1, 1, 2],
[0, 2, 1, 0],
[1, 0, 1, 1],
[2, 2, 1, 0]])
B = np.array([[ 0.54331039, 0.41018682, 0.1582158 , 0.3486124 ],
[ 0.68804647, 0.29520239, 0.40654206, 0.20473451],
[ 0.69857579, 0.38958572, 0.30361365, 0.32256483],
[ 0.46195299, 0.79863505, 0.22431876, 0.59054473]])
Desired outcome:
C = np.array([[ 2. , 1. , 1. , 2. ],
[ 2.07466874, 2. , 1. , 0.73203386],
[ 1. , 1.5984076 , 1. , 1. ],
[ 2. , 2. , 1. , 1.42925865]])
The actual matrices in question, however, are sparse and look more like this:
A = sparse.rand(250000, 1700, density=0.001, format='csr')
B = sparse.rand(1700, 1700, density=0.02, format='csr')
One simple way to go would be just setting the values using a mask index, like this:
mask = A != 0
C = A.dot(B)
C[mask] = A[mask]
However, my original arrays are sparse and quite large, so changing them via index assignment is painfully slow. Conversion to lil matrix helps a bit, but again, conversion itself takes a lot of time.
The other obvious approach, I guess, would be to just resort to iteration and skip masked values, but I'd like not to throw away the benefits of numpy/scipy-optimized array multiplication.
Some clarifications: I'm actually interested in some kind of special case, where B is always square, and therefore, A and C are of the same shape. So if there's a solution that doesn't work on arbitrary arrays but fits in my case, that's fine.
UPDATE: Some attempts:
from scipy import sparse
import numpy as np
def naive(A, B):
    mask = A != 0
    out = A.dot(B).tolil()
    out[mask] = A[mask]
    return out.tocsr()

def proposed(A, B):
    Az = A == 0
    R, C = np.where(Az)
    out = A.copy()
    out[Az] = np.einsum('ij,ji->i', A[R], B[:, C])
    return out
%timeit naive(A, B)
1 loops, best of 3: 4.04 s per loop
%timeit proposed(A, B)
/usr/local/lib/python2.7/dist-packages/scipy/sparse/compressed.py:215: SparseEfficiencyWarning: Comparing a sparse matrix with 0 using == is inefficient, try using != instead.
/usr/local/lib/python2.7/dist-packages/scipy/sparse/coo.pyc in __init__(self, arg1, shape, dtype, copy)
173 self.shape = M.shape
174
--> 175 self.row, self.col = M.nonzero()
176 self.data = M[self.row, self.col]
177 self.has_canonical_format = True
MemoryError:
ANOTHER UPDATE:
Couldn't make anything more or less useful out of Cython, at least without going too far away from Python. The idea was to leave the dot product to scipy and just try to set those original values as fast as possible, something like this:
cimport cython

@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef coo_replace(int [:] row1, int [:] col1, float [:] data1, int[:] row2, int[:] col2, float[:] data2):
    cdef int N = row1.shape[0]
    cdef int M = row2.shape[0]
    cdef int i, j
    cdef dict d = {}
    for i in range(M):
        d[(row2[i], col2[i])] = data2[i]
    for j in range(N):
        if (row1[j], col1[j]) in d:
            data1[j] = d[(row1[j], col1[j])]
This was a bit better than my pre-first "naive" implementation (using .tolil()), but following hpaulj's approach, lil can be thrown out. Maybe replacing the Python dict with something like std::map would help.
A possibly cleaner and faster version of your naive code:
In [57]: r,c=A.nonzero() # this uses A.tocoo()
In [58]: C=A*B
In [59]: Cl=C.tolil()
In [60]: Cl[r,c]=A.tolil()[r,c]
In [61]: Cl.tocsr()
C[r,c]=A[r,c] gives an efficiency warning, but I think that's aimed more at people who do that kind of assignment in a loop.
In [63]: %%timeit C=A*B
...: C[r,c]=A[r,c]
...
The slowest run took 7.32 times longer than the fastest....
1000 loops, best of 3: 334 µs per loop
In [64]: %%timeit C=A*B
...: Cl=C.tolil()
...: Cl[r,c]=A.tolil()[r,c]
...: Cl.tocsr()
...:
100 loops, best of 3: 2.83 ms per loop
My A is small, only (250,100), but it looks like the round trip to lil isn't a time saver, despite the warning.
Masking with A==0 is bound to give problems when A is sparse:
In [66]: Az=A==0
....SparseEfficiencyWarning...
In [67]: r1,c1=Az.nonzero()
Compared to the nonzero r for A, this r1 is much larger - the row index of all zeros in the sparse matrix; everything but the 25 nonzeros.
In [70]: r.shape
Out[70]: (25,)
In [71]: r1.shape
Out[71]: (24975,)
If I index A with that r1, I get a much larger array. In effect I am repeating each row by the number of zeros in it:
In [72]: A[r1,:]
Out[72]:
<24975x100 sparse matrix of type '<class 'numpy.float64'>'
with 2473 stored elements in Compressed Sparse Row format>
In [73]: A
Out[73]:
<250x100 sparse matrix of type '<class 'numpy.float64'>'
with 25 stored elements in Compressed Sparse Row format>
I've increased the shape and number of nonzero elements by roughly 100 (the number of columns).
Defining foo, and copying Divakar's tests:
def foo(A,B):
    r,c = A.nonzero()
    C = A*B
    C[r,c] = A[r,c]
    return C
In [83]: timeit naive(A,B)
100 loops, best of 3: 2.53 ms per loop
In [84]: timeit proposed(A,B)
/...
SparseEfficiencyWarning)
100 loops, best of 3: 4.48 ms per loop
In [85]: timeit foo(A,B)
...
SparseEfficiencyWarning)
100 loops, best of 3: 2.13 ms per loop
So my version has a modest speed improvement. As Divakar found out, changing sparsity changes the relative advantages. I expect size to also change them.
The fact that A.nonzero uses the coo format suggests it might be feasible to construct the new array with that format. A lot of sparse code builds a new matrix via the coo values.
In [97]: Co=C.tocoo()
In [98]: Ao=A.tocoo()
In [99]: r=np.concatenate((Co.row,Ao.row))
In [100]: c=np.concatenate((Co.col,Ao.col))
In [101]: d=np.concatenate((Co.data,Ao.data))
In [102]: r.shape
Out[102]: (79,)
In [103]: C1=sparse.csr_matrix((d,(r,c)),shape=A.shape)
In [104]: C1
Out[104]:
<250x100 sparse matrix of type '<class 'numpy.float64'>'
with 78 stored elements in Compressed Sparse Row format>
This C1 has, I think, the same non-zero elements as the C constructed by other means. But I think one value is different because the r is longer. In this particular example, C and A share one nonzero element, and the coo style of input sums those duplicates, whereas we'd prefer to have A values overwrite everything.
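To illustrate that summing behaviour (my own example, not from the original answer): when duplicate (row, col) pairs are fed to a coo-style constructor, their data values are added together.
from scipy import sparse
m = sparse.csr_matrix(([1.0, 2.0], ([0, 0], [0, 0])), shape=(2, 2))
print(m.toarray())   # entry (0, 0) is 3.0: the duplicates were summed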
If you can tolerate this discrepancy, this is a faster way (at least for this test case):
def bar(A,B):
    C = A*B
    Co = C.tocoo()
    Ao = A.tocoo()
    r = np.concatenate((Co.row, Ao.row))
    c = np.concatenate((Co.col, Ao.col))
    d = np.concatenate((Co.data, Ao.data))
    return sparse.csr_matrix((d, (r, c)), shape=A.shape)
In [107]: timeit bar(A,B)
1000 loops, best of 3: 1.03 ms per loop
Cracked it! Well, there's a lot of scipy stuff specific to sparse matrices that I learnt along the way. Here's the implementation that I could muster -
# Find the indices in output array that are to be updated
R,C = ((A!=0).dot(B!=0)).nonzero()
mask = np.asarray(A[R,C]==0).ravel()
R,C = R[mask],C[mask]
# Make a copy of A and get the dot product through sliced rows and columns
# off A and B using the definition of matrix-multiplication
out = A.copy()
out[R,C] = (A[R].multiply(B[:,C].T).sum(1)).ravel()
The most expensive part seems to be the element-wise multiplication and summing. On some quick timing tests, it seems this would beat the original dot-mask-based solution on sparse matrices with a high degree of sparsity, which I think comes from its focus on memory efficiency.
Runtime test
Function definitions -
def naive(A, B):
    mask = A != 0
    out = A.dot(B).tolil()
    out[mask] = A[mask]
    return out.tocsr()

def proposed(A, B):
    R,C = ((A!=0).dot(B!=0)).nonzero()
    mask = np.asarray(A[R,C]==0).ravel()
    R,C = R[mask],C[mask]
    out = A.copy()
    out[R,C] = (A[R].multiply(B[:,C].T).sum(1)).ravel()
    return out
Timings -
In [57]: # Input matrices
...: M,N = 25000, 170
...: A = sparse.rand(M, N, density=0.001, format='csr')
...: B = sparse.rand(N, N, density=0.02, format='csr')
...:
In [58]: %timeit naive(A, B)
10 loops, best of 3: 92.2 ms per loop
In [59]: %timeit proposed(A, B)
10 loops, best of 3: 132 ms per loop
In [60]: # Input matrices with increased sparse-ness
...: M,N = 25000, 170
...: A = sparse.rand(M, N, density=0.0001, format='csr')
...: B = sparse.rand(N, N, density=0.002, format='csr')
...:
In [61]: %timeit naive(A, B)
10 loops, best of 3: 78.1 ms per loop
In [62]: %timeit proposed(A, B)
100 loops, best of 3: 8.03 ms per loop
Python isn't my main language, but I thought this was an interesting problem and I wanted to give this a stab :)
Preliminaries:
import numpy
import scipy.sparse
# example matrices and sparse versions
A = numpy.array([[1, 2, 0, 1], [1, 0, 1, 2], [0, 1, 2 ,1], [1, 2, 1, 0]])
B = numpy.array([[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]])
A_s = scipy.sparse.lil_matrix(A)
B_s = scipy.sparse.lil_matrix(B)
So you want to convert the original problem of:
C = A.dot(B)
C[A.nonzero()] = A[A.nonzero()]
To something sparse-y.
Just to get this out of the way, the direct "sparse" translation of the above is:
C_s = A_s.dot(B_s)
C_s[A_s.nonzero()] = A_s[A_s.nonzero()]
But it sounds like you're not happy about this, as it calculates all the dot products first, which you worry might be inefficient.
So, your question is, if you find the zeros first, and only evaluate dot products on those elements, will that be faster? I.e. for a dense matrix this could be something like:
Xs, Ys = numpy.nonzero(A==0)
D = A[:]
D[Xs, Ys] = map ( lambda x,y: A[x,:].dot(B[:,y]), Xs, Ys)
Let's translate this to a sparse matrix. My main stumbling block here was finding the "Zero" indices; since A_s==0 doesn't make sense for sparse matrices, I found them this way:
Xmax, Ymax = A_s.shape
DenseSize = Xmax * Ymax
Xgrid, Ygrid = numpy.mgrid[0:Xmax, 0:Ymax]
Ygrid = Ygrid.reshape([DenseSize,1])[:,0]
Xgrid = Xgrid.reshape([DenseSize,1])[:,0]
AllIndices = numpy.array([Xgrid, Ygrid])
NonzeroIndices = numpy.array(A_s.nonzero())
ZeroIndices = numpy.array([x for x in AllIndices.T.tolist() if x not in NonzeroIndices.T.tolist()]).T
If you know of a better / faster way, by all means try it. Once we have the Zero indices, we can do a similar mapping as before:
D_s = A_s[:]
D_s[ZeroIndices[0], ZeroIndices[1]] = map ( lambda x, y : A_s[x,:].dot(B[:,y])[0], ZeroIndices[0], ZeroIndices[1] )
which gives you your sparse matrix result.
Now I don't know if this is faster or not. I mostly took a stab because it was an interesting problem, and to see if I could do it in Python. In fact I suspect it might not be faster than the direct whole-matrix dot product, because it uses list comprehensions and mapping on a large dataset (like you say, you expect a lot of zeros). But it is an answer to your question of "how can I calculate dot products only for the zero values without multiplying the matrices as a whole". I'd be interested to see, if you do try this, how it compares in terms of speed on your datasets.
EDIT: I'm putting below an example "block processing" version based on the above, which I think should allow you to process your large dataset without problems. Let me know if it works.
import numpy
import scipy.sparse
# example matrices and sparse versions
A = numpy.array([[1, 2, 0, 1], [1, 0, 1, 2], [0, 1, 2 ,1], [1, 2, 1, 0]])
B = numpy.array([[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]])
A_s = scipy.sparse.lil_matrix(A)
B_s = scipy.sparse.lil_matrix(B)
# Choose a grid division (i.e. how many processing blocks you want to create)
BlockGrid = numpy.array([2,2])
D_s = A_s[:] # initialise from A
Xmax, Ymax = A_s.shape
BaseBSiz = numpy.array([Xmax, Ymax]) / BlockGrid
for BIndX in range(0, Xmax, BlockGrid[0]):
    for BIndY in range(0, Ymax, BlockGrid[1]):
        BSizX, BSizY = D_s[ BIndX : BIndX + BaseBSiz[0], BIndY : BIndY + BaseBSiz[1] ].shape
        Xgrid, Ygrid = numpy.mgrid[BIndX : BIndX + BSizX, BIndY : BIndY + BSizY]
        Xgrid = Xgrid.reshape([BSizX*BSizY,1])[:,0]
        Ygrid = Ygrid.reshape([BSizX*BSizY,1])[:,0]
        AllInd = numpy.array([Xgrid, Ygrid]).T
        NZeroInd = numpy.array(A_s[Xgrid, Ygrid].reshape((BSizX,BSizY)).nonzero()).T + numpy.array([[BIndX],[BIndY]]).T
        ZeroInd = numpy.array([x for x in AllInd.tolist() if x not in NZeroInd.tolist()]).T
        # Replace zero-values in current block
        D_s[ZeroInd[0], ZeroInd[1]] = map(lambda x, y: A_s[x,:].dot(B[:,y])[0], ZeroInd[0], ZeroInd[1])

indexing in numpy (related to max/argmax)

Suppose I have an N-dimensional numpy array x and an (N-1)-dimensional index array m (for example, m = x.argmax(axis=-1)). I'd like to construct (N-1) dimensional array y such that y[i_1, ..., i_N-1] = x[i_1, ..., i_N-1, m[i_1, ..., i_N-1]] (for the argmax example above it would be equivalent to y = x.max(axis=-1)).
For N=3 I could achieve what I want by
y = x[np.arange(x.shape[0])[:, np.newaxis], np.arange(x.shape[1]), m]
The question is, how do I do this for an arbitrary N?
You can use np.indices:
firstdims=np.indices(x.shape[:-1])
And append your index array m to it:
ind=tuple(firstdims)+(m,)
Then x[ind] is what you want.
In [228]: allclose(x.max(-1),x[ind])
Out[228]: True
Here's one approach using reshaping and linear indexing to handle multi-dimensional arrays of arbitrary dimensions -
shp = x.shape[:-1]
n_ele = np.prod(shp)
y_out = x.reshape(n_ele,-1)[np.arange(n_ele),m.ravel()].reshape(shp)
Let's take a sample case with an ndarray of 6 dimensions, and let's say we are using m = x.argmax(axis=-1) to index into the last dimension. So, the output would be x.max(-1). Let's verify this for the proposed solution -
In [121]: x = np.random.randint(0,9,(4,5,3,3,2,4))
In [122]: m = x.argmax(axis=-1)
In [123]: shp = x.shape[:-1]
...: n_ele = np.prod(shp)
...: y_out = x.reshape(n_ele,-1)[np.arange(n_ele),m.ravel()].reshape(shp)
...:
In [124]: np.allclose(x.max(-1),y_out)
Out[124]: True
I liked @B. M.'s solution for its elegance. So, here's a runtime test to benchmark these two -
def reshape_based(x,m):
    shp = x.shape[:-1]
    n_ele = np.prod(shp)
    return x.reshape(n_ele,-1)[np.arange(n_ele),m.ravel()].reshape(shp)

def indices_based(x,m):  ## @B. M.'s solution
    firstdims = np.indices(x.shape[:-1])
    ind = tuple(firstdims)+(m,)
    return x[ind]
Timings -
In [152]: x = np.random.randint(0,9,(4,5,3,3,4,3,6,2,4,2,5))
...: m = x.argmax(axis=-1)
...:
In [153]: %timeit indices_based(x,m)
10 loops, best of 3: 30.2 ms per loop
In [154]: %timeit reshape_based(x,m)
100 loops, best of 3: 5.14 ms per loop
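As an aside (not part of the original answers): on newer NumPy (1.15+), np.take_along_axis handles the arbitrary-N case directly and avoids building index arrays by hand.
y = np.take_along_axis(x, m[..., None], axis=-1)[..., 0]
np.allclose(y, x.max(-1))   # True when m = x.argmax(axis=-1)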

Efficiently slice windows from a 1D numpy array, around indices given by second 2D array

I want to extract multiple slices from the same 1D numpy array, where the slice indices are drawn from a random distribution. Basically, I want to achieve the following:
import numpy as np
import numpy.random
# generate some 1D data
data = np.random.randn(500)
# window size (slices are 2*winsize long)
winsize = 60
# number of slices to take from the data
inds_size = (100, 200)
# get random integers that function as indices into the data
inds = np.random.randint(low=winsize, high=len(data)-winsize, size=inds_size)
# now I want to extract slices of data, running from inds[0,0]-60 to inds[0,0]+60
sliced_data = np.zeros( (winsize*2,) + inds_size )
for k in range(inds_size[0]):
    for l in range(inds_size[1]):
        sliced_data[:,k,l] = data[inds[k,l]-winsize:inds[k,l]+winsize]
# sliced_data.shape is now (120, 100, 200)
The above nested loop works fine, but is very slow. In my real code, I will need to do this thousands of times, for data arrays a lot bigger than these. Is there any way to do this more efficiently?
Note that inds will always be 2D in my case, but after getting the slices I will always be summing over one of these two dimensions, so an approach that only accumulates the sum across the one dimension would be fine.
I found this question and this answer which seem almost the same. However, the question is only about a 1D indexing vector (as opposed to my 2D). Also, the answer lacks a bit of context, as I don't really understand how the suggested as_strided works. Since my problem does not seem uncommon, I thought I'd ask again in the hope of a more explanatory answer rather than just code.
Using as_strided in this way appears to be somewhat faster than Divakar's approach (20 ms vs 35 ms here), although memory usage might be an issue.
from numpy.lib.stride_tricks import as_strided

data_wins = as_strided(data, shape=(data.size - 2*winsize + 1, 2*winsize), strides=(8, 8))
inds = np.random.randint(low=0, high=data.size - 2*winsize, size=inds_size)
sliced = data_wins[inds]
sliced = sliced.transpose((2, 0, 1))  # to use the same index order as before
Strides are the steps in bytes for the index in each dimension. For example, with an array of shape (x, y, z) and a data type of size d (8 for float64), the strides will ordinarily be (y*z*d, z*d, d), so that the second index steps over whole rows of z items. Setting both values to 8, data_wins[i, j] and data_wins[j, i] will refer to the same memory location.
>>> import numpy as np
>>> from numpy.lib.stride_tricks import as_strided
>>> a = np.arange(10, dtype=np.int8)
>>> as_strided(a, shape=(3, 10 - 2), strides=(1, 1))
array([[0, 1, 2, 3, 4, 5, 6, 7],
[1, 2, 3, 4, 5, 6, 7, 8],
[2, 3, 4, 5, 6, 7, 8, 9]], dtype=int8)
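A slightly more portable variant of the window view (my own tweak, not from the original answer): take the byte step from data.strides instead of hard-coding 8 for float64.
step = data.strides[0]
data_wins = as_strided(data,
                       shape=(data.size - 2*winsize + 1, 2*winsize),
                       strides=(step, step))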
Here's a vectorized approach using broadcasting -
# Get 3D offsetting array and add to inds for all indices
allinds = inds + np.arange(-60,60)[:,None,None]
# Index into data with all indices for desired output
sliced_dataout = data[allinds]
Runtime test -
In [20]: # generate some 1D data
...: data = np.random.randn(500)
...:
...: # window size (slices are 2*winsize long)
...: winsize = 60
...:
...: # number of slices to take from the data
...: inds_size = (100, 200)
...:
...: # get random integers that function as indices into the data
...: inds=np.random.randint(low=winsize,high=len(data)-winsize, size=inds_size)
...:
In [21]: %%timeit
...: sliced_data = np.zeros( (winsize*2,) + inds_size )
...: for k in range(inds_size[0]):
...: for l in range(inds_size[1]):
...: sliced_data[:,k,l] = data[inds[k,l]-winsize:inds[k,l]+winsize]
...:
10 loops, best of 3: 66.9 ms per loop
In [22]: %%timeit
...: allinds = inds + np.arange(-60,60)[:,None,None]
...: sliced_dataout = data[allinds]
...:
10 loops, best of 3: 24.1 ms per loop
Memory consumption : Compromise solution
If memory consumption is an issue, here's a compromise solution with one loop -
sliced_dataout = np.zeros( (winsize*2,) + inds_size )
for k in range(sliced_dataout.shape[0]):
    sliced_dataout[k] = data[inds-winsize+k]
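Since the question mentions that one of the two index dimensions is summed over afterwards, that sum can be accumulated directly without ever materialising the full (120, 100, 200) array. A sketch of mine under that assumption, summing over the second dimension of inds:
summed = np.zeros((winsize*2, inds_size[0]))
for k in range(winsize*2):
    summed[k] = data[inds - winsize + k].sum(axis=1)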
