memory error in numpy svd - python

I am performing an SVD with numpy:
U, S, V = np.linalg.svd(A)
The shape of A is (10000, 10000). Due to the large size, it gives me a memory error:
U, S, V = np.linalg.svd(A, full_matrices=False) # nargout=3
File "/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 1319, in svd
work = zeros((lwork,), t)
MemoryError
Then how can I compute the SVD of my matrix?

Some small tips:
Close everything else that is open on your computer.
Free unnecessary memory in your program by setting variables you no longer need to None. Say you used a big dict D for some computations earlier but don't need it anymore: set D = None.
Try initializing your numpy arrays with dtype=np.int32 or dtype=np.float32 to lower the memory requirements.
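For instance, a minimal sketch of the dtype tip, assuming A is the (10000, 10000) array from the question:
import numpy as np

A = A.astype(np.float32)  # float32 halves the memory footprint compared to float64
U, S, V = np.linalg.svd(A, full_matrices=False)  # same call as in the question, on the float32 copy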
Depending on what you need the SVD for, you can also have a look at the scikit-learn package for Python; it supports many decomposition methods such as PCA and truncated SVD, together with sparse matrix support.

There is a light implementation of SVD called thin-SVD. It is used when your base matrix is approximately low-rank. Considering the dimensions of your matrix, it is highly likely to be low rank, since almost all big matrices are approximately low rank, according to the paper "Why Are Big Data Matrices Approximately Low Rank?". Hence, thin-SVD might solve this problem by not calculating all singular values and their singular vectors; rather, it aims at finding only the largest singular values.
For a corresponding implementation, look at sklearn.decomposition.TruncatedSVD.
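A minimal sketch with scikit-learn, assuming A is your (10000, 10000) array and the 100 largest components are enough (adjust n_components as needed):
from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=100, algorithm='randomized', random_state=0)
US = svd.fit_transform(A)    # shape (10000, 100): U scaled by the singular values
S = svd.singular_values_     # the 100 largest singular values
Vt = svd.components_         # shape (100, 10000): right singular vectors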

Related

Large sparse matrix inversion on Python

I'm currently working with a least-squares algorithm in Python for some geodetic calculations.
I chose Python (which is not the fastest) and it works pretty well. However, in my code I have to invert large sparse symmetric matrices (non-positive definite, so I can't use Cholesky). I currently use np.linalg.inv(), which is based on the LU decomposition.
I'm pretty sure there is some optimization to be done in terms of speed.
I thought about the Cuthill-McKee algorithm to rearrange the matrix before taking its inverse. Do you have any ideas or advice?
Thank you very much for your answers!
The good news is that if you're using any of the popular Python libraries for linear algebra (namely, numpy), the speed of Python really doesn't matter for the math – it's all done natively inside the library.
For example, when you write matrix_prod = matrix_a @ matrix_b, that's not triggering a bunch of Python loops to multiply the two matrices, but using numpy's internal implementation (which I think uses the Fortran LAPACK library).
The scipy.sparse.linalg module has your back when it comes to solving sparse systems of equations (which is what you actually do with the inverse of a matrix). If you want to use sparse matrices, that's your way to go – note that there are matrices that are sparse in mathematical terms (i.e., most entries are 0), and matrices that are stored as sparse matrices, which means you avoid storing millions of zeros. Numpy itself doesn't have sparsely stored matrices, but scipy does.
If your matrix is densely stored but mathematically sparse, i.e. you're using standard numpy ndarrays to store it, then you won't get any faster by implementing anything in Python: the theoretical complexity gains will be outweighed by the practical slowness of Python compared to highly optimized native inversion.
Inverting a sparse matrix usually destroys the sparsity. Also, you never invert a matrix if you can avoid it at all! For a sparse matrix, solving the linear equation system Ax = b for x, with A your matrix and b a known vector, is much faster than computing A⁻¹ first! So,
I'm currently working with a least-squares algorithm in Python for some geodetic calculations.
since least squares says you don't need the inverse matrix, simply don't calculate it, ever. The point of least squares is finding a solution that's as close as it gets, even if your matrix isn't invertible – which can very well be the case for sparse matrices!
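To illustrate, a hedged sketch of solving such a system with scipy's sparse solvers instead of forming an inverse (the matrix and right-hand side here are made up):
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve, lsqr

# hypothetical sparse symmetric system A x = b
n = 10_000
A = sparse.diags([np.ones(n - 1), 4 * np.ones(n), np.ones(n - 1)],
                 offsets=[-1, 0, 1], format='csc')
b = np.ones(n)

x = spsolve(A, b)      # direct sparse solve (LU); A^-1 is never formed
x_ls = lsqr(A, b)[0]   # iterative least-squares solve, works even if A is singular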

Scipy Gaussian KDE : Matrix is not positive definite

I am trying to estimate the density of a data set at certain points, using scipy.
from scipy.stats import gaussian_kde
import numpy as np
I have a dataset A of 3D points (this is just a minimal example. My actual data has many more dimensions and many more samples)
A = np.array([[0.078377 , 0.76737392, 0.45038174],
[0.65990129, 0.13154658, 0.30770917],
[0.46068406, 0.22751313, 0.28122463]])
and the points at which I want to estimate the density
B = np.array([[0.40209377, 0.21063273, 0.75885516],
[0.91709997, 0.79303252, 0.65156937]])
But I can't seem to be able to use the gaussian_kde function, as
result = gaussian_kde(A.T)(B.T)
returns
LinAlgError: Matrix is not positive definite
How do I fix this error? How do I get the density of my sample?
TL;DR:
You have highly correlated features in your data which leads to a numerical error. There are several possible ways to address this, each with pros and cons. A drop-in replacement class for gaussian_kde is proposed below.
Diagnostic
Your dataset (i.e. the matrix that you feed when creating the gaussian_kde object, not when using it), likely contains highly dependent features. This fact (possibly combined with having low numerical resolution like float32 and "too many" datapoints) causes the covariance matrix to have near-zero eigenvalues, which due to numerical error can get below zero. This in turn will break the code that uses the Cholesky decomposition on the dataset covariance matrix (see explanation for specific details).
Assuming your dataset has shape (dims, N) you can test if this is your problem via np.linalg.eigh(np.cov(dataset))[0] <= 0. If any of the outputs comes out True, let me be the first welcoming you to the club.
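Concretely, a small check along those lines (dataset stands for whatever array you pass to the gaussian_kde constructor):
import numpy as np

# dataset has shape (dims, N), as gaussian_kde expects
eigvals = np.linalg.eigh(np.cov(dataset))[0]
print((eigvals <= 0).any())   # True means the covariance is numerically non-PD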
Treatments
The idea is to get all eigenvalues above zero.
Increasing numerical resolution to the highest float that is practical can be an easy fix and worth trying, but may not be enough.
Given the fact that this is caused by correlated features, removing datapoints doesn't help much a priori. There is a slim hope that having fewer numbers to crunch will propagate less error and keep the eigenvalues above zero. It's easy to implement, but it discards data points.
A more involved fix is to identify the highly correlated features and merge them or ignore the "superfluous" ones. This is tricky especially if the correlations among dimensions vary from instance to instance.
Probably the cleanest way is to leave the data untouched, and lift the problematic eigenvalues to positive values. This is usually done in 2 ways:
SVD addresses the problem directly at its core: decompose the covariance matrix and replace the negative eigenvalues with a small positive epsilon (a sketch of this follows below). This will put your matrix back into the PD domain while introducing minimal error.
If the SVD is too expensive to compute, an alternative numerical hack is to add epsilon * np.eye(D) to the covariance matrix. This has the effect of adding epsilon to each one of the eigenvalues. Again, if epsilon is small enough, this doesn't introduce that much of an error.
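A minimal sketch of the first approach, assuming cov is the offending covariance matrix (the epsilon value is arbitrary):
import numpy as np

def clip_to_pd(cov, eps=1e-10):
    """Replace non-positive eigenvalues of a symmetric matrix with eps."""
    w, V = np.linalg.eigh(cov)    # symmetric eigendecomposition
    w = np.clip(w, eps, None)     # lift eigenvalues below eps up to eps
    return (V * w) @ V.T          # reassemble V @ diag(w) @ V.T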
If you go for the last approach you'll need to tell gaussian_kde to modify its covariance matrix. This is a relatively clean way I found to do that: simply add this class to your codebase and replace gaussian_kde with GaussianKde (tested on my end, seems to work fine).
import numpy as np
from scipy.stats import gaussian_kde


class GaussianKde(gaussian_kde):
    """
    Drop-in replacement for gaussian_kde that adds the class attribute EPSILON
    to the covmat eigenvalues, to prevent exceptions due to numerical error.
    """

    EPSILON = 1e-10  # adjust this at will

    def _compute_covariance(self):
        """Computes the covariance matrix for each Gaussian kernel using
        covariance_factor().
        """
        self.factor = self.covariance_factor()
        # Cache covariance and inverse covariance of the data
        if not hasattr(self, '_data_inv_cov'):
            self._data_covariance = np.atleast_2d(np.cov(self.dataset, rowvar=1,
                                                         bias=False,
                                                         aweights=self.weights))
            # we're going the easy way here
            self._data_covariance += self.EPSILON * np.eye(
                len(self._data_covariance))
            self._data_inv_cov = np.linalg.inv(self._data_covariance)
        self.covariance = self._data_covariance * self.factor**2
        self.inv_cov = self._data_inv_cov / self.factor**2
        L = np.linalg.cholesky(self.covariance * 2 * np.pi)
        self._norm_factor = 2 * np.log(np.diag(L)).sum()  # needed for scipy 1.5.2
        self.log_det = 2 * np.log(np.diag(L)).sum()       # changed var name on 1.6.2
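Usage then stays the same as with gaussian_kde, e.g. with the arrays from the question:
result = GaussianKde(A.T)(B.T)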
Explanation
In case your error is similar, but not quite that, or anyone feels curious, here is the process I followed, hopefully it helps.
The exception stack specified that the error was triggered during a Cholesky decomposition. In my case, this was this line inside the _compute_covariance method.
Indeed, after the exception, checking the eigenvalues of the covariance and inv_cov attributes via np.linalg.eigh showed that covariance had a close-to-zero negative eigenvalue, and hence its inverse had a huge one. Since Cholesky expects PD matrices, this triggered an error.
At this point we can heavily suspect that the tiny, negative eigenvalue is a numerical error, since covariance matrices are PSD. Indeed, the error source comes when the covariance matrix is originally computed from the dataset that has been passed to the constructor, here. In my case, the covariance matrix yielded at least 2 almost identical columns, which led to the residual negative eigenvalue due to numerical error.
When will your dataset lead to a quasi-singular covariance matrix? That question is perfectly addressed in this other SE post. The bottom line is: If some variable is an exact linear combination of the other variables, with constant term allowed, the correlation and covariance matrices of the variables will be singular. The dependency observed in such matrix between its columns is actually that same dependency as the dependency between the variables in the data observed after the variables have been centered (their means brought to 0) or standardized (if we mean correlation rather than covariance matrix) (Kudos and +1 to ttnphns for the amazing work).

Why does it consume so much memory when I multiply two CSR matrices using scipy?

I use a Scipy CSR representation of an 800,000 x 350,000 matrix, let's say it's M. I want to calculate the dot product M * M[0:x].T. Now, depending on the value of x, the memory consumption grows. x=1 is not noticeable, but if x=2000 the multiplication process takes around 8 gigabytes of RAM.
I wonder what happens when I calculate this product and why it takes so much memory compared to storing the sparse matrix (around 30 MB). Is the matrix expanded for the multiplication?
By investigating the results and the memory consumption over time and after each operation, I found out that the reason is the result of the sparse matrix multiplication. Indeed, there are many zero values in M, but the result of M*M.T is a matrix containing only 50% zeros, so the result consumes a lot of memory.
Example: let's assume each row vector of M has a non-zero entry at the same index but is otherwise sparse. Then the result of M*M.T would not be sparse at all (no zero values).
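A small sketch illustrating this effect (shapes and density are made up):
import numpy as np
from scipy import sparse

# hypothetical: a very sparse matrix whose rows all share one non-zero column
M = sparse.random(1000, 5000, density=0.001, format='lil', random_state=0)
M[:, 0] = 1.0                 # common non-zero entry in every row
M = M.tocsr()

G = M @ M.T                   # Gram matrix of the rows
print(M.nnz, G.nnz)           # G has 1,000,000 stored entries: fully dense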
Nonetheless, thanks for helping.
The compiled core of CSR matrix multiplication is found at
https://github.com/scipy/scipy/blob/0cff7a5fe6226668729fc2551105692ce114c2b3/scipy/sparse/sparsetools/csr.h
starting around line 500, the csr_matmat... function. It includes a reference to a math paper that it is based on.
The Python code called with A*B is __mul__. Look at the version for your csr matrix to make sure that it is calling self._mul_sparse_matrix, which you will see ends up calling self.format + '_matmat_pass1' (and pass2).
Assuming it isn't resorting to the dense versions, you'll have to study the underlying algorithm to understand whether this memory use is realistic or not.

Trick SciPy into convolution for sparse diagonal systems

I'm trying to convert some code to Python, but I noticed that SciPy's sparse operations are having some trouble handling systems that are diagonal.
For example, the following code can be written as a per-pixel convolution, which in my C++ implementation is really fast – with overlap it runs at roughly memory-access speed. I would expect Python to exploit this, because the system is diagonal.
When I try to run it in Python, it hogs system resources and nearly kills my machine. By comparison, the Matlab version runs much faster.
import numpy as np
from scipy import sparse
from matplotlib.pyplot import imread  # any image reader works here

print(np.version.version)
color = imread('lena.png')
gray = np.mean(color, 2)
h, w = gray.shape
img = gray.flatten()
ne = h * w
L = np.ones(ne)
M = .5 * np.ones(ne)
R = np.ones(ne)
Diags = [L, M, R]
mtx = sparse.spdiags(Diags, [-1, 0, 1], ne, ne)
# blured = np.dot(mtx, img)  # dies
I understand I could rewrite it as a 'for' loop going through the pixels, but is there a way I can keep a sparse data structure while keeping the performance?
Use mtx.dot instead of np.dot as
blured = mtx.dot(img)
or just
blured = mtx * img # where mtx is sparse and img is `dense` or `sparse`
np.dot treats both of its arguments as dense, even if one of them is sparse.
So np.dot(mtx, img) will raise a MemoryError.

Computing generalized eigenvalues for sparse matrices in python

I am using scipy.sparse.linalg.eigsh to solve the generalized eigenvalue problem for a very sparse matrix and I am running into memory problems. The matrix is a square matrix with 1 million rows/columns, but each row has only about 25 non-zero entries. Is there a way to solve the problem without reading the entire matrix into memory, i.e. working with only blocks of the matrix in memory at a time?
It's OK if the solution involves using a different library in Python or in Java.
For ARPACK, you only need to code up a routine that computes certain matrix-vector products. This can be implemented in any way you like, for instance reading the matrix from the disk.
from scipy.sparse.linalg import LinearOperator, eigsh

def my_matvec(x):
    y = ...  # compute the matrix-vector product A @ x here, e.g. by reading the matrix from disk
    return y

A = LinearOperator(matvec=my_matvec, shape=(1000000, 1000000))
eigsh(A)
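For instance, here is a hedged sketch of a matvec that streams the matrix from disk in row blocks, so only one block is in memory at a time (the file names and block layout are hypothetical):
import numpy as np
from scipy.sparse import load_npz
from scipy.sparse.linalg import LinearOperator, eigsh

N = 1_000_000
NUM_BLOCKS = 100                 # hypothetical: rows saved as 100 CSR blocks on disk
ROWS = N // NUM_BLOCKS

def block_matvec(x):
    # stream the matrix block by block; one block in memory at a time
    y = np.empty(N)
    for i in range(NUM_BLOCKS):
        block = load_npz(f"block_{i}.npz")        # hypothetical file names
        y[i * ROWS:(i + 1) * ROWS] = block @ x
    return y

A = LinearOperator(matvec=block_matvec, shape=(N, N), dtype=np.float64)
vals, vecs = eigsh(A, k=6)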
Check the scipy.sparse.linalg.eigsh documentation for what is needed in the generalized eigenvalue problem case.
The SciPy ARPACK interface exposes more or less the complete ARPACK functionality, so I doubt you will gain much by switching to Fortran or some other way of accessing ARPACK.
