CSR scipy matrix does not update after updating its values - python

I have the following code in python:
import numpy as np
from scipy.sparse import csr_matrix
M = csr_matrix(np.ones([2, 2],dtype=np.int32))
print(M)
print(M.data.shape)
for i in range(M.shape[0]):
    for j in range(M.shape[1]):
        if i == j:
            M[i, j] = 0
print(M)
print(M.data.shape)
The output of the first 2 prints is:
(0, 0) 1
(0, 1) 1
(1, 0) 1
(1, 1) 1
(4,)
The code sets the diagonal entries (i == j) to zero.
After executing the loops, the output of the last 2 prints is:
(0, 0) 0
(0, 1) 1
(1, 0) 1
(1, 1) 0
(4,)
If I understand the concept of sparse matrices correctly, this should not be the case. It should not store the zero values, and the output of the last 2 prints should look like this:
(0, 1) 1
(1, 0) 1
(2,)
Does anyone have an explanation for this? Am I doing something wrong?

Yes, you are trying to change elements of the matrix one by one. :)
OK, it does work that way, though if you changed things the other way (setting a 0 to nonzero) you would get a SparseEfficiencyWarning.
To keep your kind of change fast, it only changes the value in the M.data array and does not recalculate the indices. You have to invoke the separate csr_matrix.eliminate_zeros method to clean up the matrix. For best speed, call it once at the end of the loop.
There is a csr_matrix.setdiag method that lets you set the whole diagonal with one call. It still needs the cleanup.
In [1633]: M=sparse.csr_matrix(np.arange(9).reshape(3,3))
In [1634]: M
Out[1634]:
<3x3 sparse matrix of type '<class 'numpy.int32'>'
with 8 stored elements in Compressed Sparse Row format>
In [1635]: M.A
Out[1635]:
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]], dtype=int32)
In [1636]: M.setdiag(0)
/usr/local/lib/python3.5/dist-packages/scipy/sparse/compressed.py:730: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
In [1637]: M
Out[1637]:
<3x3 sparse matrix of type '<class 'numpy.int32'>'
with 9 stored elements in Compressed Sparse Row format>
In [1638]: M.A
Out[1638]:
array([[0, 1, 2],
[3, 0, 5],
[6, 7, 0]])
In [1639]: M.data
Out[1639]: array([0, 1, 2, 3, 0, 5, 6, 7, 0])
In [1640]: M.eliminate_zeros()
In [1641]: M
Out[1641]:
<3x3 sparse matrix of type '<class 'numpy.int32'>'
with 6 stored elements in Compressed Sparse Row format>
In [1642]: M.data
Out[1642]: array([1, 2, 3, 5, 6, 7])
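Applied to the code in the question, the cleanup is one extra call after the loop (a minimal sketch):
import numpy as np
from scipy.sparse import csr_matrix

M = csr_matrix(np.ones([2, 2], dtype=np.int32))
for i in range(min(M.shape)):
    M[i, i] = 0          # zero the diagonal, as in the question (or use M.setdiag(0))
M.eliminate_zeros()      # remove the explicitly stored zeros from the structure
print(M)                 # now only (0, 1) 1 and (1, 0) 1
print(M.data.shape)      # (2,)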


Why isn't eliminate_zeros() removing the zero entries?

Code
import numpy as np
from scipy.sparse import csr_matrix
arr = np.array([[0,0,0], [0,0,1], [1,0,1]])
mat = csr_matrix(arr)
mat.eliminate_zeros()
print(mat.toarray())
Output
[[0 0 0]
[0 0 1]
[1 0 1]]
According to the documentation, this method removes the zero entries from the matrix. However, why are there still zeros?
From this website, I've gathered the following:
eliminate_zeros removes all zeros in your matrix from the sparsity pattern (i.e. there is no longer a value stored for that position, where before there was a value stored, but it was 0).
I can still access those zero entries.
print(mat[0, 0])
The documentation should probably be more explicit. eliminate_zeros doesn't affect the logical contents of a sparse matrix at all.
eliminate_zeros changes the underlying representation of a sparse matrix without affecting its logical contents. It removes explicitly stored zeros from the data array backing the sparse matrix. It's used to reduce space consumption, and to prepare a sparse matrix for algorithms that assume there will be no explicitly stored zeros.
It does not remove logical zeros from the sparse matrix. That wouldn't be possible - you can't have a sparse matrix with a bunch of data-less holes in it. It's not like a masked array.
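A quick illustration with the question's arr (a sketch): eliminate_zeros only shrinks the stored data; indexing and toarray() give the same results before and after.
import numpy as np
from scipy.sparse import csr_matrix

arr = np.array([[0, 0, 0], [0, 0, 1], [1, 0, 1]])
mat = csr_matrix(arr)
print(mat.nnz)        # 3 - only the nonzero values were stored to begin with
print(mat[0, 0])      # 0 - a logical zero; nothing is stored at (0, 0)
mat[1, 2] = 0         # overwrite a stored value with an explicit zero
print(mat.nnz)        # still 3 - the zero stays in the stored structure
mat.eliminate_zeros()
print(mat.nnz)        # 2 - the explicit zero has been removed
print(mat.toarray())  # logical contents unchanged by eliminate_zeros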
To complement the other answer, I'll show the underlying data storage of your sparse matrix.
In [147]: from scipy import sparse
In [148]: arr = np.array([[0,0,0], [0,0,1], [1,0,1]])
The coo format is easiest to understand
In [149]: M = sparse.coo_matrix(arr)
In [150]: M
Out[150]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 3 stored elements in COOrdinate format>
In [151]: print(M)
(1, 2) 1
(2, 0) 1
(2, 2) 1
The values are actually stored in 3 arrays:
In [152]: M.data,M.row,M.col
Out[152]:
(array([1, 1, 1]),
array([1, 2, 2], dtype=int32),
array([2, 0, 2], dtype=int32))
csr format changes the row/col arrays:
In [153]: Mr = M.tocsr()
In [154]: Mr.data, Mr.indices, Mr.indptr
Out[154]:
(array([1, 1, 1]),
array([2, 0, 2], dtype=int32),
array([0, 0, 1, 3], dtype=int32))
Now let's change one element of the data array:
In [155]: Mr.data[1] = 0
In [156]: Mr.data
Out[156]: array([1, 0, 1])
eliminate_zeros finds that 0, and removes it from the data structure:
In [157]: Mr.eliminate_zeros()
In [158]: Mr.data
Out[158]: array([1, 1])
In [159]: Mr.indices
Out[159]: array([2, 2], dtype=int32)
In [160]: Mr.A
Out[160]:
array([[0, 0, 0],
[0, 0, 1],
[0, 0, 1]])
In [161]: print(Mr) # show the coo style values
(1, 2) 1
(2, 2) 1
Changing the indices and indptr of a csr matrix (changing the "sparsity pattern") is more work than simply assigning 0 to the data. So the csr format lets you make a bunch of changes to data and clean up afterwards.
Anyway, eliminate_zeros is not something a beginning user is likely to need.

Understand the csr format

I am trying to understand how scipy CSR works.
https://docs.scipy.org/doc/scipy/reference/sparse.html
For example, for the following matrix from https://en.wikipedia.org/wiki/Sparse_matrix
( 0 0 0 0 )
( 5 8 0 0 )
( 0 0 3 0 )
( 0 6 0 0 )
it says the CSR representation is the following:
V = [ 5 8 3 6 ]
COL_INDEX = [ 0 1 2 1 ]
ROW_INDEX = [ 0 0 2 3 4 ]
Must V list one row after another, with the non-zero elements of each row listed from left to right?
I can understand COL_INDEX: it is the column index (the first column is indexed as 0) corresponding to each element in V.
I don't understand ROW_INDEX. Could anybody show me how ROW_INDEX was created from the original matrix? Thanks.
From the scipy manual:
csr_matrix((data, indices, indptr), [shape=(M, N)]) is the standard
CSR representation where the column indices for row i are stored in
indices[indptr[i]:indptr[i+1]] and their corresponding values are
stored in data[indptr[i]:indptr[i+1]]. If the shape parameter is not
supplied, the matrix dimensions are inferred from the index arrays.
indptr is the same as ROW_INDEX and indices is the same as COL_INDEX.
Here is an example of a naive way to create the index and value arrays. Essentially, ROW_INDICES[i + 1] is the total number of non-zero entries in rows 0 through i inclusive, so the last entry is the total number of non-zero entries.
import numpy as np

m = np.array([[0, 0, 0, 0],
              [5, 8, 0, 0],
              [0, 0, 3, 0],
              [0, 6, 0, 0]])      # the example matrix
num_rows, num_cols = m.shape

ROW_INDICES = [0]
COL_INDICES = []
VALS = []
for i in range(num_rows):
    ROW_INDICES.append(ROW_INDICES[i])   # row i+1 starts from the running total so far
    for j in range(num_cols):
        if m[i, j] != 0:                 # a stored (non-zero) entry in row i
            ROW_INDICES[i + 1] += 1
            COL_INDICES.append(j)
            VALS.append(m[i, j])
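As a quick check (a minimal sketch building on the arrays above), these three lists can be passed straight to the csr_matrix constructor, and indptr/indices come back as the ROW_INDEX and COL_INDEX shown above:
from scipy.sparse import csr_matrix

M = csr_matrix((VALS, COL_INDICES, ROW_INDICES), shape=(num_rows, num_cols))
print(M.indptr)     # [0 0 2 3 4] - ROW_INDEX
print(M.indices)    # [0 1 2 1]   - COL_INDEX
print(M.toarray())  # reproduces the original dense matrix m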
coo format
I think it's best to start with the coo definition. It's easier to understand, and widely used:
In [90]: A = np.array([[0,0,0,0],[5,8,0,0],[0,0,3,0],[0,6,0,0]])
In [91]: M = sparse.coo_matrix(A)
The values are stored in 3 attributes:
In [92]: M.row
Out[92]: array([1, 1, 2, 3], dtype=int32)
In [93]: M.col
Out[93]: array([0, 1, 2, 1], dtype=int32)
In [94]: M.data
Out[94]: array([5, 8, 3, 6])
We can make a new matrix from those 3 arrays:
In [95]: sparse.coo_matrix((_94, (_92, _93))).A
Out[95]:
array([[0, 0, 0],
[5, 8, 0],
[0, 0, 3],
[0, 6, 0]])
oops, I need to add a shape, since one column is all 0s:
In [96]: sparse.coo_matrix((_94, (_92, _93)), shape=(4,4)).A
Out[96]:
array([[0, 0, 0, 0],
[5, 8, 0, 0],
[0, 0, 3, 0],
[0, 6, 0, 0]])
Another way to display this matrix:
In [97]: print(M)
(1, 0) 5
(1, 1) 8
(2, 2) 3
(3, 1) 6
np.where(A) gives the same non-zero coordinates.
In [108]: np.where(A)
Out[108]: (array([1, 1, 2, 3]), array([0, 1, 2, 1]))
conversion to csr
Once we have coo, we can easily convert it to csr. In fact sparse often does that for us:
In [98]: Mr = M.tocsr()
In [99]: Mr.data
Out[99]: array([5, 8, 3, 6], dtype=int64)
In [100]: Mr.indices
Out[100]: array([0, 1, 2, 1], dtype=int32)
In [101]: Mr.indptr
Out[101]: array([0, 0, 2, 3, 4], dtype=int32)
Sparse does several things here - it sorts the indices, sums duplicates, and replaces the row array with an indptr array. Here indptr is actually longer than the original row array, but in general it will be shorter, since it has just one value per row (plus 1). But perhaps more importantly, most of the fast calculation routines, especially matrix multiplication, are written for the csr format.
I've used this package a lot. MATLAB as well, where the default definition is in the coo style, but the internal storage is csc (but not as exposed to users as in scipy). But I've never tried to derive indptr from scratch. I could, but I don't need to.
csr_matrix accepts inputs in the coo style, but also in the (data, indices, indptr) style; both forms are sketched below. I wouldn't recommend the latter unless you already have those inputs calculated (say from another matrix). It's more error-prone, and probably not much faster.
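A minimal sketch of the two constructor forms (using the same arrays as above; the variable names here are mine):
from scipy import sparse
import numpy as np

data = np.array([5, 8, 3, 6])
row = np.array([1, 1, 2, 3])
col = np.array([0, 1, 2, 1])
indptr = np.array([0, 0, 2, 3, 4])

# coo-style input: (data, (row, col))
M1 = sparse.csr_matrix((data, (row, col)), shape=(4, 4))
# native csr input: (data, indices, indptr)
M2 = sparse.csr_matrix((data, col, indptr), shape=(4, 4))
print(np.array_equal(M1.toarray(), M2.toarray()))   # True - both build the same matrix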
Iteration with indptr
However, sometimes it is useful to iterate on indptr and perform calculations directly on the data. Often this is faster than working with the provided methods.
For example we can list the nonzero values by row:
In [104]: for i in range(Mr.shape[0]):
...: pt = slice(Mr.indptr[i], Mr.indptr[i+1])
...: print(i, Mr.indices[pt], Mr.data[pt])
...:
0 [] []
1 [0 1] [5 8]
2 [2] [3]
3 [1] [6]
Keeping the initial 0 makes this iteration easier. When the matrix is (10000, 90000) there's not much incentive to reduce the size of indptr by 1. A per-row count, for example, is sketched below.
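For example, counting the stored entries in each row (a one-line sketch, using the Mr built above) comes straight from indptr:
nnz_per_row = np.diff(Mr.indptr)   # array([0, 2, 1, 1]) - one count per row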
lil format
The lil format stores the matrix in a similar manner:
In [105]: Ml = M.tolil()
In [106]: Ml.data
Out[106]: array([list([]), list([5, 8]), list([3]), list([6])], dtype=object)
In [107]: Ml.rows
Out[107]: array([list([]), list([0, 1]), list([2]), list([1])], dtype=object)
In [110]: for i,(r,d) in enumerate(zip(Ml.rows, Ml.data)):
...: print(i, r, d)
...:
0 [] []
1 [0, 1] [5, 8]
2 [2] [3]
3 [1] [6]
Because of how rows are stored, lil actually allows us to fetch a view:
In [167]: Ml.getrowview(2)
Out[167]:
<1x4 sparse matrix of type '<class 'numpy.longlong'>'
with 1 stored elements in List of Lists format>
In [168]: for i in range(Ml.shape[0]):
...: print(Ml.getrowview(i))
...:
(0, 0) 5
(0, 1) 8
(0, 2) 3
(0, 1) 6

Zero several columns in csr_matrix

Assume I have a sparse matrix:
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, indices, indptr), shape=(3, 3)).toarray()
array([[1, 0, 2],
[0, 0, 3],
[4, 5, 6]])
I want to zero columns 0 and 2. Below is what I want to get:
array([[0, 0, 0],
[0, 0, 0],
[0, 5, 0]])
Below is what I tried:
sp_mat = csr_matrix((data, indices, indptr), shape=(3, 3))
zero_cols = np.array([0, 2])
sp_mat[:, zero_cols] = 0
However, I get a warning:
SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
Since the sp_mat I have is large, converting to lil_matrix is very slow.
What is an efficient way?
In [87]: >>> indptr = np.array([0, 2, 3, 6])
...: >>> indices = np.array([0, 2, 2, 0, 1, 2])
...: >>> data = np.array([1, 2, 3, 4, 5, 6])
...: M = sparse.csr_matrix((data, indices, indptr), shape=(3, 3))
In [88]: M
Out[88]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 6 stored elements in Compressed Sparse Row format>
Look at what happens with the csr assignment:
In [89]: M[:, [0, 2]] = 0
/usr/local/lib/python3.6/dist-packages/scipy/sparse/compressed.py:746: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
In [90]: M
Out[90]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 7 stored elements in Compressed Sparse Row format>
In [91]: M.data
Out[91]: array([0, 0, 0, 0, 0, 5, 0])
In [92]: M.indices
Out[92]: array([0, 2, 0, 2, 0, 1, 2], dtype=int32)
Not only does it give a warning, but it actually increases the number of stored terms, even though most of them now have a 0 value. Those are only removed when we clean up:
In [93]: M.eliminate_zeros()
In [94]: M
Out[94]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 1 stored elements in Compressed Sparse Row format>
In the indexed assignment, csr doesn't distinguish between setting 0s and other values; it treats them all the same.
I should note that the efficiency warning is given primarily to keep users from using indexed assignment repeatedly (as in an iteration). For one-time actions it is overly alarmist.
For indexed assignment, the lil format is more efficient (or at least it doesn't warn about efficiency). But converting to/from that format is time-consuming.
Another option is to find and set the new 0s directly, followed by an eliminate_zeros(), as sketched below.
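A minimal sketch of that option, assuming a fresh sp_mat and the zero_cols array from the question:
mask = np.isin(sp_mat.indices, zero_cols)  # stored entries that sit in the columns to be zeroed
sp_mat.data[mask] = 0                      # values change but the structure doesn't, so no warning
sp_mat.eliminate_zeros()                   # now drop the explicit zeros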
Another is to use a matrix multiply. I think a diagonal sparse with 0's in the right columns will do the trick.
In [103]: M
Out[103]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 6 stored elements in Compressed Sparse Row format>
In [104]: D = sparse.diags([0,1,0], dtype=M.dtype)
In [105]: D
Out[105]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 3 stored elements (1 diagonals) in DIAgonal format>
In [106]: D.A
Out[106]:
array([[0, 0, 0],
[0, 1, 0],
[0, 0, 0]])
In [107]: M1 = M*D
In [108]: M1
Out[108]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 1 stored elements in Compressed Sparse Row format>
In [110]: M1.A
Out[110]:
array([[0, 0, 0],
[0, 0, 0],
[0, 5, 0]], dtype=int64)
If you multiply the matrix in-place, you don't get the efficiency warning. It's only changing the values of existing non-zero terms, so it isn't changing the sparsity of the matrix (at least not until you eliminate the zeros):
In [111]: M = sparse.csr_matrix((data, indices, indptr), shape=(3, 3))
In [112]: M[:,[0,2]] *= 0
In [113]: M
Out[113]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 6 stored elements in Compressed Sparse Row format>
In [114]: M.eliminate_zeros()
In [115]: M
Out[115]:
<3x3 sparse matrix of type '<class 'numpy.int64'>'
with 1 stored elements in Compressed Sparse Row format>
Matrix multiplication is the way to go.
For my very large CSR matrix (size 2M x 2M), direct assignment with sp_mat[:, zero_cols] = 0 results in an out-of-memory error. Suppose the zeroed columns are marked as True in the boolean array zero_mask; then multiplying by a diagonal matrix does the job efficiently (within 3 seconds).
import scipy.sparse as sp
sp_mat = sp_mat @ sp.diags((~zero_mask).astype(int))
Here (~zero_mask).astype(int) gives a 1-d array of 0s and 1s that specifies which columns should be kept (1) and which should be zeroed (0).
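A self-contained version of the same idea (a sketch, rebuilding the small matrix from the question):
import numpy as np
import scipy.sparse as sp

sp_mat = sp.csr_matrix((np.array([1, 2, 3, 4, 5, 6]),
                        np.array([0, 2, 2, 0, 1, 2]),
                        np.array([0, 2, 3, 6])), shape=(3, 3))
zero_mask = np.zeros(sp_mat.shape[1], dtype=bool)
zero_mask[[0, 2]] = True                              # columns 0 and 2 should be zeroed
sp_mat = sp_mat @ sp.diags((~zero_mask).astype(int))  # multiply by the 0/1 diagonal
print(sp_mat.toarray())
# [[0 0 0]
#  [0 0 0]
#  [0 5 0]]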

Numpy (sparse) repeated index increment

a, b are 1D numpy ndarrays of the same size with integer data type.
C is a 2D scipy.sparse.lil_matrix.
If the indexing [a, b] contains repeated indices, does C[a, b] += np.array([1]) always increment C exactly once for each unique index pair in [a, b]?
Does the documentation mention this?
Example:
import scipy.sparse as ss
import numpy as np
C = ss.lil_matrix((3,2), dtype=int)
a = np.array([0, 1, 2] * 4)
b = np.array([0, 1] * 6)
C[a, b] += np.array([1])
print(C.todense(), '\n')
C[a, b] += np.array([1])
print(C.todense())
Result:
[[1 1]
[1 1]
[1 1]]
[[2 2]
[2 2]
[2 2]]
I don't know that it's documented
It's well known that dense arrays are set just once per unique index due to buffering. We have to use add.at to get unbuffered addition.
In [966]: C=sparse.lil_matrix((3,2),dtype=int)
In [967]: Ca=C.A
In [968]: Ca[a, b] += 1
In [969]: Ca
Out[969]:
array([[1, 1],
[1, 1],
[1, 1]])
In [970]: Ca=C.A
In [973]: np.add.at(Ca,(a,b),1)
In [974]: Ca
Out[974]:
array([[2, 2],
[2, 2],
[2, 2]])
Your example shows that the lil indexed setting behaves in the buffered sense as well. But I'd have to look at the code to see exactly why.
It is documented that coo style inputs are summed across duplicates.
In [975]: M=sparse.coo_matrix((np.ones_like(a),(a,b)), shape=(3,2))
In [976]: print(M)
(0, 0) 1
(1, 1) 1
(2, 0) 1
(0, 1) 1
(1, 0) 1
(2, 1) 1
(0, 0) 1
(1, 1) 1
(2, 0) 1
(0, 1) 1
(1, 0) 1
(2, 1) 1
In [977]: M.A
Out[977]:
array([[2, 2],
[2, 2],
[2, 2]])
In [978]: M
Out[978]:
<3x2 sparse matrix of type '<class 'numpy.int32'>'
with 12 stored elements in COOrdinate format>
In [979]: M.tocsr()
Out[979]:
<3x2 sparse matrix of type '<class 'numpy.int32'>'
with 6 stored elements in Compressed Sparse Row format>
In [980]: M.sum_duplicates()
In [981]: M
Out[981]:
<3x2 sparse matrix of type '<class 'numpy.int32'>'
with 6 stored elements in COOrdinate format>
Points are stored in coo format as entered, but when the matrix is used for display or calculation (csr format), duplicates are summed.

How to hstack several sparse matrices (feature matrices)?

I have 3 sparse matrices:
In [39]:
mat1
Out[39]:
(1, 878049)
<1x878049 sparse matrix of type '<type 'numpy.int64'>'
with 878048 stored elements in Compressed Sparse Row format>
In [37]:
mat2
Out[37]:
(1, 878049)
<1x878049 sparse matrix of type '<type 'numpy.int64'>'
with 744315 stored elements in Compressed Sparse Row format>
In [35]:
mat3
Out[35]:
(1, 878049)
<1x878049 sparse matrix of type '<type 'numpy.int64'>'
with 788618 stored elements in Compressed Sparse Row format>
From the documentation, I read that it is possible to hstack, vstack, and concatenate such matrices. So I tried to hstack them:
import numpy as np
matrix1 = np.hstack([[address_feature, dayweek_feature]]).T
matrix2 = np.vstack([[matrix1, pddis_feature]]).T
X = matrix2
However, the dimensions do not match:
In [41]:
X_combined_features.shape
Out[41]:
(2, 1)
Note that I am stacking these matrices since I would like to use them with a scikit-learn classification algorithm. Therefore, how should I hstack a number of different sparse matrices?
Use the sparse versions of hstack/vstack. As a general rule, you need to use the sparse functions and methods, not the numpy ones with similar names; sparse matrices are not subclasses of numpy's ndarray.
But your three matrices do not look sparse. They are 1x878049, and one has 878048 nonzero elements - that means just one 0 element.
So you could just as well turn them into dense arrays (with .toarray() or .A) and use np.hstack or np.vstack.
np.hstack([address_feature.A, dayweek_feature.A])
And don't use the double brackets. All of the concatenate functions take a simple list or tuple of the arrays, and that list can have more than 2 arrays:
In [296]: A=sparse.csr_matrix([0,1,2,0,0,1])
In [297]: B=sparse.csr_matrix([0,0,0,1,0,1])
In [298]: C=sparse.csr_matrix([1,0,0,0,1,0])
In [299]: sparse.vstack([A,B,C])
Out[299]:
<3x6 sparse matrix of type '<class 'numpy.int32'>'
with 7 stored elements in Compressed Sparse Row format>
In [300]: sparse.vstack([A,B,C]).A
Out[300]:
array([[0, 1, 2, 0, 0, 1],
[0, 0, 0, 1, 0, 1],
[1, 0, 0, 0, 1, 0]], dtype=int32)
In [301]: sparse.hstack([A,B,C]).A
Out[301]: array([[0, 1, 2, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]], dtype=int32)
In [302]: np.vstack([A.A,B.A,C.A])
Out[302]:
array([[0, 1, 2, 0, 0, 1],
[0, 0, 0, 1, 0, 1],
[1, 0, 0, 0, 1, 0]], dtype=int32)
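One usage note for the scikit-learn case in the question (an assumption about the intended use): sparse.hstack and sparse.vstack return a coo matrix by default, but both accept a format argument, and scikit-learn estimators generally accept csr input:
X = sparse.vstack([A, B, C], format='csr')   # stack and get csr back directly instead of coo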
