I am implementing a PCA algorithm in python with the standard svd function, i.e.,
u, s, v = svd(sigma)
The original space has 4800 dimensions, i.e., sigma is a 4800*4800 matrix.
The dimension does not seem too large, but I always get a ``MemoryError'' in the svd function. My computer runs 32-bit Windows 7.
So what is the problem? And what is the typical feature dimension that the svd algorithm can handle?
Thanks!
You state "the standard svd function" - but from where? There is no single standard one in Python; you could be using one from numpy, scipy or several other places (this is why "namespaces are a honking good idea").
From basic principles, 4800*4800 = 23,040,000, i.e. about 23 million entries. If each entry were a single byte (which I doubt), you would be using 23 MB per matrix; you are getting back three results, so with the original that is 23*4 = 92 MB. But you are almost certainly using float or double, which multiplies that by 4 or 8, giving up to roughly 0.75 GB before any intermediate results. A 32-bit Windows 7 process also only gets about 2 GB of address space by default (3 GB with the /3GB switch), and physical RAM below 4 GB has to be shared with the OS, drivers, graphics, etc. You will find that some implementations are more efficient than others and more tolerant of having to fall back on virtual memory.
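As a rough sketch of that arithmetic (assuming float64 and a full SVD from numpy/scipy), you can estimate the footprint before calling svd:

import numpy as np

n = 4800
itemsize = np.dtype(np.float64).itemsize          # 8 bytes per entry

sigma_mib = n * n * itemsize / 1024**2            # the input matrix
result_mib = (2 * n * n + n) * itemsize / 1024**2 # u and v (n x n each) plus s (n)

print(f"sigma alone:      {sigma_mib:.0f} MiB")                # ~176 MiB
print(f"sigma + u, s, v:  {sigma_mib + result_mib:.0f} MiB")   # ~530 MiB

LAPACK's workspace and any temporary copies come on top of that, so it is easy to exhaust the ~2 GB address space of a 32-bit process even with plenty of physical RAM fitted.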
Since Python runs on everything from tiny platforms, e.g. a Raspberry Pi or an embedded board, through various PC configurations, e.g. a 512 MB DOS machine or 64-bit Linux with 32 GB of RAM, up to a single supercomputer or a cluster of machines working as a supercomputing cluster, you are unlikely to find a simple "this works for matrices up to NxM" figure.
I'm solving a problem that involves a sparse matrix. It has the three main diagonals, and a bunch of other subdiagonals. The full size of the matrix is (2048000, 2048000), but as it is quite sparse, it has only 525312000 stored elements, corresponding to about 4 GB of memory for double precision. When I create this matrix, Activity Monitor and top on my Mac both report a memory use of about 4 GB, as expected.
Next, I create an incomplete LU factorisation, to use as a preconditioner when solving the matrix system with BiCGStab. I use the following code:
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spilu
ILU = spilu(csc_matrix(L + Lr))
here, Lr is the matrix I mentioned above, L is another sparse diagonal matrix that is purely tridiagonal, and thus much smaller.
The variable ILU is of type SuperLU, and according to ILU.nnz it contains only 20384063 stored elements, which means it should take about 150 MB of memory, yet Activity Monitor and top both claim that I am now using about 8 GB of memory, where previously I was using about 4 GB. So what happened to all of that memory?
Not a particularly satisfying answer, but I posted an issue at the SciPy repo, and from the discussion it seems this is an issue on Mac only, and that there isn't much to be done about it.
https://github.com/scipy/scipy/issues/13827
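For what it's worth, you can get a rough lower bound on what the factorisation object itself holds by summing the buffers of the L and U factors that the SuperLU object exposes (a sketch; it will not account for whatever SuperLU or the allocator keeps internally, which appears to be where the extra memory goes):

def superlu_nbytes(f):
    """Rough lower bound on the memory held by a SciPy SuperLU factorisation."""
    total = 0
    for factor in (f.L, f.U):            # the factors as sparse matrices
        total += factor.data.nbytes      # nonzero values
        total += factor.indices.nbytes   # row indices
        total += factor.indptr.nbytes    # column pointers
    total += f.perm_r.nbytes + f.perm_c.nbytes   # permutation vectors
    return total

print(superlu_nbytes(ILU) / 1024**2, "MiB")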
I have been trying different sparse solvers available in Python 3 and comparing their performance against each other and also against Octave and Matlab. I have chosen both direct and iterative approaches, as I will explain in more detail below.
To generate a proper sparse matrix with a banded structure, a Poisson problem is solved using finite elements on square grids of N=250, N=500 and N=1000. This results in a matrix A of dimension N^2 x N^2 and a vector b of dimension N^2 x 1, i.e., the largest system is a million by a million. If one is interested in replicating my results, I have uploaded the matrices A and the vectors b at the following link (it will expire in 30 days): Get systems used here. The matrices are stored as triplets I, J, V, i.e. the first two columns are the row and column indices, respectively, and the third column holds the values corresponding to those indices. Note that some values in V that are nearly zero are left in on purpose. Still, the banded structure is preserved, as a "spy" plot of the matrix shows in both Matlab and Python.
For comparison, I have used the following solvers:
Matlab and Octave, direct solver: The canonical x=A\b.
Matlab and Octave, pcg solver: The preconditioned conjugate gradient, pcg solver pcg(A,b,1e-5,size(b,1)) (no preconditioner is used).
Scipy (Python), direct solver: linalg.spsolve(A, b) where A is previously formatted in csr_matrix format.
Scipy (Python), pcg solver: sp.linalg.cg(A, b, x0=None, tol=1e-05)
Scipy (Python), UMFPACK solver: spsolve(A, b) using from scikits.umfpack import spsolve. This solver is apparently available (only?) under Linux, since it makes use of libsuitesparse [Timothy Davis, Texas A&M]. On Ubuntu, this first has to be installed with sudo apt-get install libsuitesparse-dev.
Furthermore, the aforementioned python solvers are tested in:
Windows.
Linux.
Mac OS.
Conditions:
Timing is done right before and after the solution of the systems. I.e., the overhead for reading the matrices is not considered.
Timing is done ten times for each system, and an average and a standard deviation are computed.
Hardware:
Windows and Linux: Dell, Intel(R) Core(TM) i7-8850H CPU @ 2.60 GHz, 32 GB DDR4 RAM.
Mac OS: MacBook Pro Retina mid-2014, Intel(R) quad-core i7 @ 2.2 GHz, 16 GB DDR3 RAM.
Results:
Observations:
Matlab A\b is the fastest despite running on an older computer.
There are notable differences between the Linux and Windows versions. See, for instance, the direct solver at NxN=1e6. This is despite Linux running under Windows (WSL).
There can be huge scatter in the SciPy solvers: if the same system is solved several times, one of the runs can take more than twice as long as the others.
The fastest option in Python can be nearly four times slower than Matlab running on more limited hardware. Really?
If you want to reproduce the tests, I leave here very simple scripts.
For matlab/octave:
IJS=load('KbN1M.txt');
b=load('FbN1M.txt');
I=IJS(:,1);
J=IJS(:,2);
S=IJS(:,3);
Neval=10;
tsparse=zeros(Neval,1);
tsolve_direct=zeros(Neval,1);
tsolve_sparse=zeros(Neval,1);
tsolve_pcg=zeros(Neval,1);
for i=1:Neval
    tic
    A=sparse(I,J,S);
    tsparse(i)=toc;
    tic
    x=A\b;
    tsolve_direct(i)=toc;
    tic
    x2=pcg(A,b,1e-5,size(b,1));
    tsolve_pcg(i)=toc;
end
save -ascii octave_n1M_tsparse.txt tsparse
save -ascii octave_n1M_tsolvedirect.txt tsolve_direct
save -ascii octave_n1M_tsolvepcg.txt tsolve_pcg
For python:
import time
from scipy import sparse as sp
from scipy.sparse import linalg
import numpy as np
from scikits.umfpack import spsolve, splu #NEEDS LINUX
b=np.loadtxt('FbN1M.txt')
triplets=np.loadtxt('KbN1M.txt')
I=triplets[:,0]-1
J=triplets[:,1]-1
V=triplets[:,2]
I=I.astype(int)
J=J.astype(int)
NN=int(b.shape[0])
Neval=10
time_sparse=np.zeros((Neval,1))
time_direct=np.zeros((Neval,1))
time_conj=np.zeros((Neval,1))
time_umfpack=np.zeros((Neval,1))
for i in range(Neval):
    t = time.time()
    A=sp.coo_matrix((V, (I, J)), shape=(NN, NN))
    A=sp.csr_matrix(A)
    time_sparse[i,0]=time.time()-t
    t = time.time()
    x=linalg.spsolve(A, b)
    time_direct[i,0] = time.time() - t
    t = time.time()
    x2=sp.linalg.cg(A, b, x0=None, tol=1e-05)
    time_conj[i,0] = time.time() - t
    t = time.time()
    x3 = spsolve(A, b) #ONLY IN LINUX
    time_umfpack[i,0] = time.time() - t
np.savetxt('pythonlinux_n1M_tsparse.txt',time_sparse,fmt='%.18f')
np.savetxt('pythonlinux_n1M_tsolvedirect.txt',time_direct,fmt='%.18f')
np.savetxt('pythonlinux_n1M_tsolvepcg.txt',time_conj,fmt='%.18f')
np.savetxt('pythonlinux_n1M_tsolveumfpack.txt',time_umfpack,fmt='%.18f')
Is there a way to further improve sparse solution times using Python, or at least get within the same order of performance as Matlab? I am open to suggestions using C/C++ or Fortran and a wrapper for Python, but I believe it will not get much better than the UMFPACK choice. Suggestions are very welcome.
P.S. I am aware of previous posts, e.g. scipy slow sparse matrix solver
Issues using the scipy.sparse.linalg linear system solvers
How to use Numba to speed up sparse linear system solvers in Python that are provided in scipy.sparse.linalg?
But I think none is as comprehensive as this one, highlighting even more differences between operating systems when using Python libraries.
EDIT_1:
I have added a new plot with results using the QR solver from Intel MKL via a Python wrapper, as suggested in the comments. This is, however, still behind Matlab's performance.
To do this, one needs to add:
from sparse_dot_mkl import sparse_qr_solve_mkl
and
sparse_qr_solve_mkl(A.astype(np.float32), b.astype(np.float32))
to the scripts provided in the original post. The ".astype(np.float32)" can be omitted, but then the performance gets slightly worse (about 10%) for this system.
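For reference, a separate timing loop for this solver, reusing the A and b already assembled by the original script, could look roughly like this (the output file name is just a placeholder I made up):

from sparse_dot_mkl import sparse_qr_solve_mkl

time_mkl = np.zeros((Neval, 1))
for i in range(Neval):
    t = time.time()
    # single precision; drop the .astype(np.float32) casts for float64 (about 10% slower here)
    x4 = sparse_qr_solve_mkl(A.astype(np.float32), b.astype(np.float32))
    time_mkl[i, 0] = time.time() - t
np.savetxt('pythonlinux_n1M_tsolvemklqr.txt', time_mkl, fmt='%.18f')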
I will try to answer myself. To provide an answer, I tried an even more demanding example, with a matrix of size (N,N) of about half a million by half a million and the corresponding vector of size (N,1). This one, however, is much less sparse (more dense) than the one provided in the question. Stored in ASCII, this matrix takes about 1.7 GB, compared to about 0.25 GB for the one from the example (even though its dimensions are larger). See its shape here.
Then I tried to solve Ax=b again using Matlab, Octave and Python, with the aforementioned direct solvers from SciPy, the Intel MKL wrapper, and UMFPACK from Tim Davis.
My first surprise is that both Matlab and Octave could solve the system using A\b, which is not necessarily a direct solver, since it chooses the best solver based on the characteristics of the matrix; see Matlab's documentation for x=A\b. However, Python's linalg.spsolve, the MKL wrapper and UMFPACK were throwing out-of-memory errors on Windows and Linux. On Mac, linalg.spsolve was somehow computing a solution, and although its performance was very poor, it never threw memory errors. I wonder if memory is handled differently depending on the OS; to me it seems that the Mac swapped memory to the hard drive rather than serving it from RAM. The performance of the CG solver in Python was rather poor compared to Matlab. However, one can get a huge improvement in the Python CG solver if A=0.5*(A+A') is computed first (provided, obviously, that one has a symmetric system). Using a preconditioner in Python did not help: I tried the sp.linalg.spilu method together with sp.linalg.LinearOperator to build a preconditioner, but the performance was rather poor. In Matlab, one can use the incomplete Cholesky decomposition instead.
For the out-of-memory problem, the solution was to use an LU decomposition (here in the Cholesky form A=LL') and solve two nested triangular systems: with Ax=b, first y=L\b and then x=L'\y.
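A SciPy-flavoured sketch of that factor-once, two-triangular-solves idea (not what produced the Matlab numbers below, and note that SciPy's spsolve already uses SuperLU underneath, so this mainly buys factor reuse rather than less memory):

from scipy.sparse.linalg import splu

lu = splu(A.tocsc())   # sparse LU factorisation of A
x = lu.solve(b)        # forward solve with L, then backward solve with U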
I put here the minimum solution times:
Matlab mac, A\b = 294 s.
Matlab mac, PCG (without preconditioner) = 17.9 s.
Matlab mac, PCG (with incomplete Cholesky preconditioner) = 9.8 s.
Scipy mac, direct = 4797 s.
Octave, A\b = 302 s.
Octave, PCG (without preconditioner) = 28.6 s.
Octave, PCG (with incomplete Cholesky preconditioner) = 11.4 s.
Scipy, PCG (without A=0.5(A+A'))= 119 s.
Scipy, PCG (with A=0.5(A+A'))= 12.7 s.
Scipy, LU decomposition using UMFPACK (Linux) = 3.7 s total.
So the answer is YES, there are ways to improve the solution times in SciPy. The use of the wrappers for UMFPACK (Linux) or the Intel MKL QR solver is highly recommended, if the memory of the workstation allows it. Otherwise, performing A=0.5*(A+A') prior to using the conjugate gradient solver can have a positive effect on solution performance when dealing with symmetric systems.
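For completeness, the symmetrisation trick is a one-liner before the CG call (valid only when the underlying system really is symmetric and the asymmetry is just numerical noise; variable names as in the script above):

A_sym = 0.5 * (A + A.T)                      # enforce exact symmetry
x, info = sp.linalg.cg(A_sym, b, tol=1e-05)  # info == 0 means it converged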
Let me know if someone would be interested in having this new system, so I can upload it.
Currently I'm implementing this paper for my undergraduate thesis with Python, but I only use the Mahalanobis metric learning (in case you're curious).
In short, I face a problem when I need to learn a matrix of size 67K*67K consisting of integers, simply via numpy.dot(A.T, A) where A is a random vector of shape (1, 67K). When I do that it simply throws a MemoryError, since my PC only has 8 GB of RAM and a raw calculation puts the memory needed at about 16 GB just to initialise it. Then I searched for an alternative and found dask.
So I moved on to dask with dask.array.dot(A.T, A), and that works. But then I need to apply a whitening transformation to that matrix, and in dask I can achieve that by computing the SVD. But every time I do that SVD, the IPython kernel dies (I assume due to lack of memory).
This is what I do so far, from initialisation until the kernel dies:
import dask.array as da

fv_length = 512*2*66                                              # 67584
W = da.random.randint(10, 20, size=(fv_length,), chunks=1000)     # 1-D chunks for the 1-D array
W = da.reshape(W, (1, fv_length))
W_T = W.T
Wt = da.dot(W_T, W); del W, W_T                                   # (fv_length, fv_length) outer product
Wt = da.reshape(Wt, (fv_length*fv_length//2, 2))                  # integer division needed in Python 3
U, S, Vt = da.linalg.svd(Wt); del Wt
I never get U, S, and Vt.
Is my memory simply not enough for this sort of thing, even when I'm using dask?
Or is this actually not a spec problem, but bad memory management on my part?
Or something else?
At this point I'm desperate enough to try a bigger-spec PC, so I am planning to rent a bare-metal server with 32 GB of RAM. Even if I do so, is that enough?
Generally speaking dask.array does not guarantee out-of-core operation on all computations. A square matrix-matrix multiply (or any L3 BLAS operation) is more-or-less impossible to do efficiently in small memory.
You can ask Dask to use an on-disk cache for intermediate values. See the FAQ under the question My computation fills memory, how do I spill to disk?. However this will be limited by disk-writing speeds, which are generally fairly slow.
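As an illustration only (this uses the dask.distributed scheduler rather than the cache recipe from that FAQ, and the limits and paths are made up), a local cluster can be told to spill to disk once workers approach a memory limit:

from dask.distributed import Client, LocalCluster

# Workers spill chunks to local_directory as they approach memory_limit.
cluster = LocalCluster(n_workers=1, threads_per_worker=4,
                       memory_limit="6GB", local_directory="/tmp/dask-spill")
client = Client(cluster)

# ...then build the dask arrays and call da.linalg.svd as before...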
A large memory machine and NumPy is probably the simplest way to resolve this problem. Alternatively you could try to find a different formulation of your problem.
I need to multiply two big matrices and sort their columns.
import numpy

a = numpy.random.rand(1000000, 100)
b = numpy.random.rand(300000, 100)
c = numpy.dot(b, a.T)
# indices of the 10 smallest entries in each column of c
top10 = [numpy.argsort(j)[:10] for j in c.T]
This process takes a lot of time and memory. Is there a way to speed up this process? If not, how can I calculate the RAM needed for this operation? I currently have an EC2 box with 4 GB RAM and no swap.
I was wondering if this operation can be serialized so that I don't have to store everything in memory.
One thing that you can do to speed things up is compile numpy with an optimized BLAS library such as ATLAS, GotoBLAS or Intel's proprietary MKL.
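You can check which BLAS/LAPACK your NumPy is currently linked against with:

import numpy as np
np.show_config()   # lists the BLAS/LAPACK libraries this NumPy build uses

If it only reports the reference BLAS, that is where most of the speed-up from rebuilding will come from.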
To calculate the memory needed, you need to monitor Python's Resident Set Size ("RSS"). The following commands were run on a UNIX system (FreeBSD to be precise, on a 64-bit machine).
> ipython
In [1]: import numpy as np
In [2]: a = np.random.rand(1000, 1000)
In [3]: a.dtype
Out[3]: dtype('float64')
In [4]: del(a)
To get the RSS I ran:
ps -xao comm,rss | grep python
[Edit: See the ps manual page for a complete explanation of the options, but basically these ps options make it show only the command and resident set size of all processes. The equivalent format for Linux's ps would be ps -xao c,r, I believe.]
The results are;
After starting the interpreter: 24880 kiB
After importing numpy: 34364 kiB
After creating a: 42200 kiB
After deleting a: 34368 kiB
Calculating the size;
In [4]: (42200 - 34364) * 1024
Out[4]: 8024064
In [5]: 8024064/(1000*1000)
Out[5]: 8.024064
As you can see, the calculated size matches the 8 bytes for the default datatype float64 quite well. The difference is internal overhead.
The size of your original arrays in MiB will be approximately;
In [11]: 8*1000000*100/1024**2
Out[11]: 762.939453125
In [12]: 8*300000*100/1024**2
Out[12]: 228.8818359375
That's not too bad. However, the dot product will be way too large:
In [19]: 8*1000000*300000/1024**3
Out[19]: 2235.1741790771484
That's 2235 GiB!
What you can do is split up the problem and perform the dot operation in pieces:
load b as an ndarray
load every row from a as an ndarray in turn.
multiply the row by every column of b and write the result to a file.
del() the row and load the next row.
This will not make it faster, but it will make it use less memory!
Edit: In this case I would suggest writing the output file in binary format (e.g. using struct or ndarray.tofile). That would make it much easier to read a column from the file with e.g. a numpy.memmap.
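A minimal sketch of that piecewise approach, writing the product into a binary file via np.memmap (the block size is a made-up number to tune against available RAM; the output file is on the order of a terabyte, so the disk has to hold it):

import numpy as np

a = np.random.rand(1000000, 100)   # the inputs themselves fit in RAM (~1 GB together)
b = np.random.rand(300000, 100)

# float32 output on disk: 300000 x 1000000 x 4 bytes ~= 1.1 TiB
c = np.memmap("c.dat", dtype=np.float32, mode="w+", shape=(b.shape[0], a.shape[0]))

rows_per_block = 50                # hypothetical block size, tune to your RAM
for start in range(0, b.shape[0], rows_per_block):
    stop = min(start + rows_per_block, b.shape[0])
    # (rows_per_block, 100) @ (100, 1000000) -> (rows_per_block, 1000000)
    c[start:stop] = (b[start:stop] @ a.T).astype(np.float32)
c.flush()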
What DrV and Roland Smith said are good answers; they should be listened to. My answer does nothing more than present an option to make your data sparse, a complete game-changer.
Sparsity can be extremely powerful. It would transform your O(100 * 300000 * 1000000) operation into an O(k) operation with k non-zero elements (sparsity only means that the matrix is largely zero). I know sparsity has been mentioned by DrV and disregarded as not applicable but I would guess it is.
All that needs to be done is to find a sparse representation for computing this transform (and interpreting the results is another ball game). Easy (and fast) methods include the Fourier transform or wavelet transform (both rely on similarity between matrix elements) but this problem is generalizable through several different algorithms.
In my experience with problems like this, it smells like a relatively common problem that is typically solved through some clever trick. In a field like machine learning, where these types of problems are classified as "simple", that's often the case.
You have a problem in any case. As Roland Smith shows you in his answer, the amount of data and the number of calculations are enormous. You may not be very familiar with linear algebra, so a few words of explanation might help in understanding (and then hopefully solving) the problem.
Your arrays are both a collection of vectors with length 100. One of the arrays has 300 000 vectors, the other one 1 000 000 vectors. The dot product between these arrays means that you calculate the dot product of each possible pair of vectors. There are 300 000 000 000 such pairs, so the resulting matrix is either 1.2 TB or 2.4 TB depending on whether you use 32 or 64-bit floats.
On my computer dot multiplying a (300,100) array with a (100,1000) array takes approximately 1 ms. Extrapolating from that, you are looking at a 1000 s calculation time (depending on the number of cores).
The nice thing about taking a dot product is that you can do it piecewise. Keeping the output is then another problem.
If you were running it on your own computer, calculating the resulting matrix could be done in the following way:
create an output array as a np.memmap array onto the disk
calculate the results one row at a time (as explained by Roland Smith)
This would result in a linear file write with a largish (2.4 TB) file.
This does not require too many lines of code. However, make sure everything is transposed in a suitable way; transposing the input arrays is cheap, transposing the output is extremely expensive. Accessing the resulting huge array is cheap if you can access elements close to each other, expensive, if you access elements far away from each other.
Sorting a huge memmapped array has to be done carefully. You should use in-place sort algorithms which operate on contiguous chunks of data. The data is stored in 4 KiB chunks (512 or 1024 floats), and the fewer chunks you need to read, the better.
Now that you are not running the code on your own machine but on a cloud platform, things change a lot. Usually the cloud SSD storage is very fast with random accesses, but IO is expensive (also in terms of money). Probably the least expensive option is to calculate suitable chunks of data and send them to S3 storage for further use. The "suitable chunk" part depends on how you intend to use the data. If you need to process individual columns, then you send one or a few columns at a time to the cloud object storage.
However, a lot depends on your sorting needs. Your code looks as if you are ultimately only looking at the first few items of each column. If this is the case, then you should only calculate those first few items and not the full output matrix. That way you can do everything in memory, as in the sketch below.
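A sketch of that idea, computing the product a block of columns at a time and keeping only the indices of the ten smallest entries per column (the block size is a made-up number; this never materialises the full 2+ TB matrix):

import numpy as np

a = np.random.rand(1000000, 100)
b = np.random.rand(300000, 100)

k = 10
cols_per_block = 200                              # hypothetical, tune to available RAM
top_idx = np.empty((a.shape[0], k), dtype=np.intp)

for start in range(0, a.shape[0], cols_per_block):
    stop = min(start + cols_per_block, a.shape[0])
    block = b @ a[start:stop].T                   # (300000, cols_per_block) block of c's columns
    # same result as [argsort(j)[:10] for j in c.T] for these columns;
    # np.argpartition would be cheaper if the order of the 10 does not matter
    top_idx[start:stop] = np.argsort(block, axis=0)[:k].T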
Maybe if you tell a bit more about your sorting needs, there can be a viable way to do what you want.
Oh, one important thing: are your matrices dense or sparse? (Sparse means they mostly contain zeros.) If you expect your output matrix to be mostly zero, that may change the game completely.
I have a program for learning Artificial Neural Network and it takes a 2-d numpy array as training data. The size of the data array I want to use is around 300,000 x 400 floats. I can't use chunking here because the library I am using (DeepLearningTutorials) takes a single numpy array as training data.
The code throws a MemoryError when this process's RAM usage is around 1.6 GB (I checked it in the system monitor), but I have a total of 8 GB of RAM. Also, the system is Ubuntu 12.04, 32-bit.
I checked the answers to other similar questions, but some say there is no such thing as allocating memory to your Python program, and others are not clear on how to increase the process memory.
One interesting thing is I am running the same code on a different machine and it can take a numpy array of almost 1,500,000 x 400 floats without any problem. The basic configurations are similar except that the other machine is 64-bit and this one is 32-bit.
Could someone please give a theoretical answer as to why there is such a big difference, and whether this is the only reason for my problem?
A 32-bit OS can only address up to around 4 GB of RAM, while a 64-bit OS can take advantage of a lot more (theoretically 16.8 million terabytes). Since your OS is 32-bit, it can only take advantage of 4 GB, so the other 4 GB isn't used.
The other 64-bit machine doesn't have the 4 GB RAM limit, so it can take advantage of all of its installed RAM.
These limits come from the fact that a 32-bit machine can only store memory addresses (pointers) of 32 bits, so there are 2^32 different memory locations the computer can identify, i.e. it can address 2^32 different bytes. Similarly, a 64-bit machine can identify 2^64 different memory locations and so address 2^64 different bytes.
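The arithmetic behind those limits, assuming flat byte addressing:

print(2**32 / 2**30)        # 4.0 -> a 32-bit pointer can address 4 GiB
print(2**64 / 2**40 / 1e6)  # ~16.78 -> a 64-bit pointer can address ~16.8 million TiB

The MemoryError at around 1.6 GB is also consistent with this: the 300,000 x 400 float64 array alone needs a contiguous block of roughly 0.9 GiB, which a 32-bit process already using 1.6 GB of its 3-4 GiB address space may simply not be able to find.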