Element-wise maximum of two sparse matrices - python

Is there an easy/built-in way to get the element-wise maximum of two (or ideally more) sparse matrices? I.e., a sparse equivalent of np.maximum.

This did the trick:
import numpy as np

def maximum(A, B):
    BisBigger = A - B
    BisBigger.data = np.where(BisBigger.data < 0, 1, 0)
    return A - A.multiply(BisBigger) + B.multiply(BisBigger)
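For reference, here is a quick sanity check of this helper against dense np.maximum; the random test matrices are my own addition, not part of the original answer:
from scipy import sparse

X = sparse.rand(50, 50, density=0.1, format='csr')
Y = sparse.rand(50, 50, density=0.1, format='csr')
assert np.array_equal(maximum(X, Y).toarray(),
                      np.maximum(X.toarray(), Y.toarray()))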

No, there's no built-in way to do this in scipy.sparse. The easy solution is
np.maximum(X.A, Y.A)
but this is obviously going to be very memory-intensive when the matrices have large dimensions and it might crash your machine. A memory-efficient (but by no means fast) solution is
from scipy.sparse import coo_matrix

# convert to COO, if necessary
X = X.tocoo()
Y = Y.tocoo()

Xdict = dict(((i, j), v) for i, j, v in zip(X.row, X.col, X.data))
Ydict = dict(((i, j), v) for i, j, v in zip(Y.row, Y.col, Y.data))

keys = list(set(Xdict.keys()).union(Ydict.keys()))
XmaxY = [max(Xdict.get((i, j), 0), Ydict.get((i, j), 0)) for i, j in keys]
XmaxY = coo_matrix((XmaxY, list(zip(*keys))))
Note that this uses pure Python instead of vectorized idioms. You can try shaving some of the running time off by vectorizing parts of it.
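For example, here is one possible way to vectorize the same idea; the coo_maximum helper and its linearized-key trick are my own sketch, not from the answer. It stacks the COO entries of both matrices and reduces duplicate coordinates with np.maximum.at:
import numpy as np
from scipy.sparse import coo_matrix

def coo_maximum(X, Y):
    # assumes X and Y have the same shape and no duplicate COO entries
    X, Y = X.tocoo(), Y.tocoo()
    rows = np.concatenate([X.row, Y.row])
    cols = np.concatenate([X.col, Y.col])
    data = np.concatenate([X.data, Y.data])
    # linearize each (row, col) pair into a single integer key
    keys = rows.astype(np.int64) * X.shape[1] + cols
    uniq, inv = np.unique(keys, return_inverse=True)
    out = np.full(uniq.shape, -np.inf)
    np.maximum.at(out, inv, data)  # max over duplicate coordinates
    # a coordinate stored in only one matrix competes against an implicit 0
    counts = np.bincount(inv, minlength=len(uniq))
    out[counts == 1] = np.maximum(out[counts == 1], 0)
    return coo_matrix((out, (uniq // X.shape[1], uniq % X.shape[1])),
                      shape=X.shape)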

Here's another memory-efficient solution that should be a bit quicker than larsmans'. It's based on finding the set of unique indices for the nonzero elements in the two arrays using code from Jaime's excellent answer here.
import numpy as np
from scipy import sparse
def sparsemax(X, Y):
    # the indices of all non-zero elements in both arrays
    idx = np.hstack((X.nonzero(), Y.nonzero()))

    # find the set of unique non-zero indices
    idx = tuple(unique_rows(idx.T).T)

    # take the element-wise max over only these indices
    X[idx] = np.maximum(X[idx].A, Y[idx].A)

    return X

def unique_rows(a):
    void_type = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    b = np.ascontiguousarray(a).view(void_type)
    idx = np.unique(b, return_index=True)[1]
    return a[idx]
Testing:
def setup(n=1000, fmt='csr'):
    return sparse.rand(n, n, format=fmt), sparse.rand(n, n, format=fmt)

X, Y = setup()
Z = sparsemax(X, Y)
print(np.all(Z.A == np.maximum(X.A, Y.A)))
# True

%%timeit X, Y = setup()
sparsemax(X, Y)
# 100 loops, best of 3: 4.92 ms per loop

The latest scipy (0.13) defines element-wise boolean comparisons for sparse matrices. So:
BisBigger = B>A
A - A.multiply(BisBigger) + B.multiply(BisBigger)
np.maximum does not (yet) work because it uses np.where, which is still trying to get the truth value of an array.
Curiously B>A returns a boolean dtype, while B>=A is float64.
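A quick check of this expression against dense np.maximum (the random test matrices are my own addition; requires scipy >= 0.13):
import numpy as np
from scipy import sparse

A = sparse.rand(5, 5, density=0.4, format='csr')
B = sparse.rand(5, 5, density=0.4, format='csr')
BisBigger = B > A
Z = A - A.multiply(BisBigger) + B.multiply(BisBigger)
assert np.array_equal(Z.toarray(), np.maximum(A.toarray(), B.toarray()))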

Here is a function that returns a sparse matrix that is the element-wise maximum of two sparse matrices. It implements the answer by hpaulj:
def sparse_max(A, B):
    """
    Return the element-wise maximum of sparse matrices `A` and `B`.
    """
    AgtB = (A > B).astype(int)
    M = AgtB.multiply(A - B) + B
    return M
Testing:
import numpy as np
from scipy import sparse

A = sparse.csr_matrix(np.random.randint(-9, 10, 25).reshape((5, 5)))
B = sparse.csr_matrix(np.random.randint(-9, 10, 25).reshape((5, 5)))
M = sparse_max(A, B)
M2 = sparse_max(B, A)

# Test symmetry:
print((M.A == M2.A).all())
# Test that M is larger than or equal to A and B, element-wise:
print((M.A >= A.A).all())
print((M.A >= B.A).all())

from scipy import sparse
from numpy import array

I = array([0, 3, 1, 0])
J = array([0, 3, 1, 2])
V = array([4, 5, 7, 9])
A = sparse.coo_matrix((V, (I, J)), shape=(4, 4))

A.data.max()
# 9
If you haven't already, you should try out IPython: after making your sparse matrix A, typing A. followed by Tab prints a list of the methods you can call on A. From this you would see that A.data gives you the non-zero entries as an array, so you just want the maximum of that.
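Note that newer scipy versions also give sparse matrices a max method of their own, which, unlike A.data.max(), takes the implicit zeros into account:
A.max()         # 9 here; would be 0 if all stored entries were negative
A.max(axis=0)   # per-column maxima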

Related

Using python/numpy to create a complex matrix

Using python/numpy, I would like to create a 2D matrix M whose components are given by the formula in the edit below.
I know I can do this with a bunch of for loops but is there a better way to do this by using numpy (not using for loops)?
This is what I tried, which ended up giving me a ValueError.
I tried to first define a function that takes the sum over k:
def sum_function(i, j):
    initial_array = np.arange(g(i, j), h(i, j) + 1)
    applied_array = f(i, j, initial_array)
    return applied_array.sum()
Then I tried to create the M matrix with np.mgrid as follows:
ii, jj = np.mgrid[start:fin, start:fin]
M_matrix = sum_function(ii, jj)
--
(Edited)
Let me write down the concrete form of a matrix as an example:
M_{i,j} = \sum_{k=\min(i,j)}^{i+j} \sin\left((i+j)^k\right)
if i,j = 0,1, then this matrix is 2 by 2 and its form will be
\bigl(\begin{smallmatrix}
\sin(0) & \sin(1) \\
\sin(1) & \sin(2)+\sin(4)
\end{smallmatrix}\bigr)
Now if the matrix gets really big, how would I create this matrix without using for loops?
To simplify thinking, let's ravel the i,j dimensions into a single ij dimension. Can we evaluate 3 arrays:
G = g(ij) # for all ij values
H = h(ij)
F = f(ij, kk) # for all ij, and all kk
In other words, can g, h, f be evaluated at multiple values, producing whole arrays?
If the G and H values were the same for all ij, or subsets (preferably slices), then
F[:, G:H].sum(axis=1)
would be the value for all ij.
If the H-G difference, the size of each slice, was the same, then we can construct a 2d indexing array, GH such that
F[:, GH].sum(axis=1)
In other words we are summing constant size windows of the F rows.
But if the H-G differences vary across ij, I think we are stuck with doing the sum for each ij element separately - with Python-level loops, or ones compiled with numba or cython.
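To make the constant-width case concrete, here is a minimal sketch (the array names and sizes are illustrative, not from the answer above): each row ij of F gets its own window starting at G[ij], all of width w, and a column of row indices alongside GH pulls out every window at once.
import numpy as np

nij, nk, w = 5, 12, 3                  # illustrative sizes
F = np.random.rand(nij, nk)            # F[ij, k] = f(ij, k) on the full grid
G = np.random.randint(0, nk - w, nij)  # start of each row's window
GH = G[:, None] + np.arange(w)         # (nij, w) window indices per row
rows = np.arange(nij)[:, None]         # matching row indices for fancy indexing
result = F[rows, GH].sum(axis=1)
# result[i] == F[i, G[i]:G[i]+w].sum() for every i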
I think I myself found an answer to this. I first create a 3D array F_{i,j,k} = f(i,j,k), and then create a mask_array whose component is True if g(i,j) <= k <= h(i,j), False otherwise. Then I compute the element-wise multiplication of these two arrays, F*mask_array, and take the sum over the k axis.
For example, this matrix can be efficiently created by the following code.
M_{i,j} = \sum_{k=\min(i,j)}^{i+j} \sin\left((i+j)^k\right)
# in this example, g(i,j) = min(i,j), h(i,j) = i+j, and f(i,j,k) = sin((i+j)^k)
# 0 <= i, j <= 2
# kk should range from min g(i,j) to max h(i,j)
ii, jj, kk = np.mgrid[0:3, 0:3, 0:5]

# k >= g(i,j) = min(i,j), i.e. k >= i or k >= j
frm1 = kk >= jj
frm2 = kk >= ii
frm = np.logical_or(frm1, frm2)

# k <= h(i,j) = i+j
to = kk <= ii + jj

# mask
k_mask = np.logical_and(frm, to)

def f(i, j, k):
    return np.sin((i + j)**k)

M_before_mask = f(ii, jj, kk)

# Matrix created
M_matrix = (M_before_mask*k_mask).sum(axis=2)
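As a sanity check (my addition, not part of the original answer), the masked result can be compared against a direct double loop:
M_loop = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        ks = np.arange(min(i, j), i + j + 1)
        M_loop[i, j] = np.sin((i + j)**ks).sum()
assert np.allclose(M_matrix, M_loop)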

How do I convert this Matlab code with meshgrid and arrays to Python code?

I am attempting to write a program which constructs a matrix and performs a singular value decomposition on it. I am evaluating the function ax^2 + bx + 1 on a grid. I then make a uniform meshgrid of a and b. The rows of the matrix correspond to different quadratic coefficients, while each column corresponds to a grid point at which the function is evaluated.
The matlab code is here:
% Collect data
x = linspace(-1,1,100);
[a,b] = meshgrid(0:0.1:1,0:0.1:1);
D=zeros(numel(x),numel(a));
sz = size(D)
% Build “Dose” matrix
for i = 1:numel(a)
    D(:,i) = a(i)*x.^2 + b(i)*x + 1;
end
% Do the SVD:
[U,S,V]=svd(D,'econ');
D_reconstructed = U*S*V';
plot(diag(S))
scatter3(a(:),b(:),V(:,1))
This is my attempt at a solution:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 100)
def f(x, a, b):
    return a*x*x + b*x + 1
a, b = np.mgrid[0:1:0.1,0:1:0.1]
#a = b = np.arange(0,1,0.01)
D = np.zeros((x.size, a.size))
for i in range(a.size):
    D[i] = a[i]*x*x + b[i]*x + 1
U, S, V = np.linalg.svd(D)
plt.plot(np.diag(S))
fig = plt.figure()
ax = plt.axes(projection="3d")
ax.scatter(a, b, V[0])
but I always get broadcasting errors which I am not sure how to fix.
Firstly, in MATLAB you're assigning to D(:,i), but in python you're assigning to D[i]. The latter is equivalent to D[i, ...] which is in your case D[i, :]. Instead you seem to need D[:, i].
Secondly, in MATLAB using a linear index into a 2d array (namely a and b) will give you flattened views. If you do that with numpy you get slices of an array instead, just as I mentioned with D[i].
You can do away with the loop by broadcasting, getting your desired 2d array by .ravelling (or reshaping) your a and b arrays:
x = np.linspace(-1, 1, 100)[:, None] # inject trailing singleton for broadcasting
a, b = np.mgrid[0:1:0.1, 0:1:0.1]
D = a.ravel() * x**2 + b.ravel() * x + 1
The way this works is that x has shape (100, 1) after we inject a trailing singleton (in MATLAB trailing singletons are implied; in numpy it's the leading ones). Both a.ravel() and b.ravel() have shape (10*10,), which is compatible with (1, 10*10), so broadcasting produces shape (100, 10*10). You could also replace the calls to ravel with
a, b = np.mgrid[...].reshape(2, -1)
which is a trick I sometimes use, but this is harder to read if you're unfamiliar with the pattern.
Side note: it's better to use example data where dimensions end up being of different size so that you notice if something ends up being transposed.
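Putting the pieces together, a full translation might look like the sketch below. Two details worth flagging: np.mgrid[0:1:0.1] excludes the endpoint 1.0, unlike MATLAB's 0:0.1:1, and numpy's svd returns V transposed, so MATLAB's V(:,1) corresponds to Vt[0].
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 100)[:, None]              # (100, 1) for broadcasting
a, b = np.mgrid[0:1:0.1, 0:1:0.1]                 # 10x10 grid (excludes 1.0)
D = a.ravel() * x**2 + b.ravel() * x + 1          # (100, 100) "Dose" matrix
U, S, Vt = np.linalg.svd(D, full_matrices=False)  # like MATLAB's svd(D, 'econ')
D_reconstructed = U @ np.diag(S) @ Vt

plt.plot(S)                                       # singular values
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(a.ravel(), b.ravel(), Vt[0])           # first right singular vector
plt.show()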

Is there in Python an approach faster than a double "for" cycle to evaluate a function characterized by an internal vector?

I have the following function:
k = np.linspace(0, 5, 100)

def f(x, y):
    m = k
    return sum(np.sin(m - x)*np.exp(-y**2))
I would like to obtain a 2D grid of values of f(x,y) evaluated on these two arrays:
x=np.linspace(0,4,30)
y=np.linspace(0,2,70)
Is there a faster way of calculating this than a double "for" cycle like this one?
matrix = np.zeros((len(x), len(y)))
for i in range(len(x)):
    for j in range(len(y)):
        matrix[i, j] = f(x[i], y[j])
z = matrix.T
I tried to use the "numpy meshgrid" function in this way:
xx,yy=np.meshgrid(x, y)
z=f(xx,yy)
however I got the following error message:
ValueError: operands could not be broadcast together with shapes (100,) (70,30).
Here's a numpy approach. If we start with your original array setup,
k = np.linspace(0,5,100)
x = np.linspace(0,4,30)
y = np.linspace(0,2,70)
then
matrix = np.sin(k[:, np.newaxis] - x).sum(axis=0)[:, np.newaxis]*np.exp(-y**2)
returns the same (30,70) "matrix" calculated with the double "for" cycle.
For reference, https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html outlines the broadcasting rules in numpy and https://www.numpy.org/devdocs/user/theory.broadcasting.html
gives a nice illustration of using these rules.
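To convince yourself the one-liner matches the double "for" cycle, a quick check (my addition; it reuses f from the question):
ref = np.array([[f(xi, yj) for yj in y] for xi in x])
assert np.allclose(matrix, ref)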
k = np.linspace(0, 5, 100)
x = np.linspace(0, 4, 30)
y = np.linspace(0, 2, 70)

def f(x, y):
    ## m=k
    return sum(np.sin(k - x)*np.exp(-y**2))

# original
def g():
    m = np.zeros((len(x), len(y)))
    for i in range(len(x)):
        for j in range(len(y)):
            m[i, j] = f(x[i], y[j])
    return m.T

# k.shape, x.shape, y.shape -> (100,), (30,), (70,)

# sine of each k minus each x
q = np.sin(k[:, None] - x)   # q.shape -> (100, 30)

# [sine of each k minus each x] times [e to the negative (y squared)]
r = np.exp(-y**2)            # r.shape -> (70,)
s = q[..., None] * r         # s.shape -> (100, 30, 70)

t = s.sum(0)
v = t.T                      # v.shape -> (70, 30)

assert np.all(np.isclose(v, g()))
assert np.all(v == g())
Broadcasting is what makes the q, r, and s steps work; see the numpy broadcasting documentation linked in the previous answer.

Multiply every matrix of a numpy array with the transpose of every other efficiently

Given a numpy array M I want to calculate the matrix product M[i] @ M[j].T of every 2-combination of matrices in this array. After applying some operations to that product, I want to store the results in another matrix at position [i, j]. Is there a way to compute this fast without iterating over two nested loops?
I.e. what I want to avoid (because it takes literally hours) is:
import numpy as np

M = np.random.rand(7000, 3, 3)
r = np.zeros((len(M), len(M)))
for i in range(len(r)):
    for j in range(len(r[0])):
        n = M[i] @ M[j].T
        r[i, j] = np.linalg.norm(n)
Assuming you wanted r's shape to be (M.shape[0], M.shape[0]).
M = np.random.rand(700,3,3)
t = M.shape[0]
r = np.zeros((t, t))
This is equivalent to the first statement of your inner loop
q = M[:, None, ...] @ M.swapaxes(1, 2)
And this completes the inner/outer loop
p = np.linalg.norm(q, axis=(2,3))
for i in range(len(r)):
    for j in range(len(r[0])):
        n = M[i] @ M[j].T
        r[i, j] = np.linalg.norm(n)
>>> np.all(np.isclose(p,r))
True
With M.shape -> (70,3,3) it is about 42 times faster than the for loop.
With M.shape -> (700,3,3) it is about 36 times faster than the for loop.
My poor computer cannot handle M.shape --> (7000,3,3) ... MemoryError.
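If the full (t, t, 3, 3) intermediate is what exhausts memory, one workaround (my own sketch, not from the answer above) is to build q in row blocks, so only a (chunk, t, 3, 3) slice is alive at a time:
import numpy as np

def pairwise_norms_chunked(M, chunk=100):
    """r[i, j] = Frobenius norm of M[i] @ M[j].T, computed in row blocks."""
    t = M.shape[0]
    Mt = M.swapaxes(1, 2)
    r = np.empty((t, t))
    for start in range(0, t, chunk):
        block = M[start:start + chunk, None] @ Mt   # (chunk, t, 3, 3)
        r[start:start + chunk] = np.linalg.norm(block, axis=(2, 3))
    return r

M = np.random.rand(7000, 3, 3)
r = pairwise_norms_chunked(M)   # peak memory ~ chunk * t * 9 floats per block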

Cumulative summation of a numpy array by index

Assume you have an array of values that will need to be summed together
d = [1,1,1,1,1]
and a second array specifying which elements need to be summed together
i = [0,0,1,2,2]
The result will be stored in a new array of size max(i)+1. So for example i=[0,0,0,0,0] would be equivalent to summing all the elements of d and storing the result at position 0 of a new array of size 1.
I tried to implement this using
c = zeros(max(i)+1)
c[i] += d
However, the += operation adds each element only once, thus giving the unexpected result of
[1,1,1]
instead of
[2,1,2]
How would one correctly implement this kind of summation?
If I understand the question correctly, there is a fast function for this (as long as the data array is 1d)
>>> i = np.array([0,0,1,2,2])
>>> d = np.array([0,1,2,3,4])
>>> np.bincount(i, weights=d)
array([ 1., 2., 7.])
np.bincount returns an array covering all integers in range(max(i) + 1), even if some counts are zero.
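If you need the output to have a fixed length beyond max(i), the minlength argument pads with zeros:
>>> np.bincount(np.array([0, 0, 1]), weights=np.array([1., 2., 3.]), minlength=5)
array([ 3.,  3.,  0.,  0.,  0.])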
Juh_'s comment is the most efficient solution. Here's working code:
import numpy as np
import scipy.ndimage as ni

i = np.array([0, 0, 1, 2, 2])
d = np.array([0, 1, 2, 3, 4])
n_indices = i.max() + 1
print(ni.sum(d, i, np.arange(n_indices)))
This solution should be more efficient for large arrays (it iterates over the possible index values instead of the individual entries of i):
import numpy as np

i = np.array([0, 0, 1, 2, 2])
d = np.array([0, 1, 2, 3, 4])

i_max = i.max()
c = np.empty(i_max + 1)
for j in range(i_max + 1):
    c[j] = d[i == j].sum()

print(c)
# [1. 2. 7.]
def zeros(ilen):
    r = []
    for i in range(0, ilen):
        r.append(0)
    return r

i_list = [0, 0, 1, 2, 2]
d = [1, 1, 1, 1, 1]
result = zeros(max(i_list) + 1)
for pos, index in enumerate(i_list):
    result[index] += d[pos]
print(result)
In the general case, when you want to sum submatrices by labels, you can use the following code:
import numpy as np
from scipy.sparse import coo_matrix
def labeled_sum1(x, labels):
    P = coo_matrix((np.ones(x.shape[0]), (labels, np.arange(len(labels)))))
    res = P.dot(x.reshape((x.shape[0], np.prod(x.shape[1:]))))
    return res.reshape((res.shape[0],) + x.shape[1:])

def labeled_sum2(x, labels):
    res = np.empty((np.max(labels) + 1,) + x.shape[1:], x.dtype)
    for i in np.ndindex(x.shape[1:]):
        res[(...,) + i] = np.bincount(labels, x[(...,) + i])
    return res
The first method uses sparse matrix multiplication. The second one is a generalization of user333700's answer. Both methods have comparable speed:
x = np.random.randn(100000, 10, 10)
labels = np.random.randint(0, 1000, 100000)
%time res1 = labeled_sum1(x, labels)
%time res2 = labeled_sum2(x, labels)
np.all(res1 == res2)
Output:
Wall time: 73.2 ms
Wall time: 68.9 ms
True
