Flushing memmap completely to disk [duplicate] - python

I have a huge dataset on which I wish to perform PCA. I am limited by RAM and by the computational efficiency of PCA.
Therefore, I switched to Incremental PCA.
Dataset size: (140000, 3504)
The documentation states: "This algorithm has constant memory complexity, on the order of batch_size, enabling use of np.memmap files without loading the entire file into memory."
This sounds really good, but I am unsure how to take advantage of it.
I tried loading one memmap, hoping it would be accessed in chunks, but my RAM blew up.
My code below ends up using a lot of RAM:
ut = np.memmap('my_array.mmap', dtype=np.float16, mode='w+', shape=(140000, 3504))
clf = IncrementalPCA(copy=False)
X_train = clf.fit_transform(ut)
When I say "my RAM blew", the Traceback I see is:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\sklearn\base.py", line 433, in fit_transform
    return self.fit(X, **fit_params).transform(X)
  File "C:\Python27\lib\site-packages\sklearn\decomposition\incremental_pca.py", line 171, in fit
    X = check_array(X, dtype=np.float)
  File "C:\Python27\lib\site-packages\sklearn\utils\validation.py", line 347, in check_array
    array = np.array(array, dtype=dtype, order=order, copy=copy)
MemoryError
How can I improve this without compromising on accuracy by reducing the batch size?
My ideas to diagnose:
I looked at the sklearn source code, and in the fit() function I can see the following. This makes sense to me, but I am still unsure what is wrong in my case.
for batch in gen_batches(n_samples, self.batch_size_):
    self.partial_fit(X[batch])
return self
Edit:
Worst case, I will have to write my own iterative PCA that batch-processes by reading and closing .npy files. But that would defeat the purpose of the facility that is already there.
Edit2:
If I could somehow delete a batch of the memmap file once it has been processed, that would make much more sense.
Edit3:
Ideally, if IncrementalPCA.fit() is only using batches, it should not crash my RAM. I am posting the whole code, just to make sure that I am not making a mistake in flushing the memmap completely to disk beforehand.
temp_train_data = X_train[1000:]
temp_labels = y[1000:]
# out must be a memmap for out.flush() below to exist; the filename is illustrative
out = np.memmap('out.mmap', dtype=np.int64, mode='w+', shape=(200001, 3504))
for index, row in enumerate(temp_train_data):
    actual_index = index + 1000
    data = X_train[actual_index - 1000:actual_index + 1].ravel()
    __, cd_i = pywt.dwt(data, 'haar')
    out[index] = cd_i
out.flush()
pca_obj = IncrementalPCA()
clf = pca_obj.fit(out)
Surprisingly, I note that out.flush() doesn't free my memory. Is there a way to use del out to free my memory completely, and then pass a pointer to the file to IncrementalPCA.fit()?
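To make the pattern concrete, here is a minimal sketch of what I mean (the tiny shape and the temporary file location are made up for illustration): flush the write-mode memmap, drop the Python reference, then reopen the same file read-only for the next stage.

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'out.mmap')
out = np.memmap(path, dtype=np.int64, mode='w+', shape=(100, 35))
out[:] = 1
out.flush()   # push dirty pages to disk
del out       # release the mapping itself

# Reopen read-only; pages are now faulted in from disk on demand.
out_ro = np.memmap(path, dtype=np.int64, mode='r', shape=(100, 35))
print(int(out_ro.sum()))  # 3500
```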

You have hit a problem with sklearn in a 32-bit environment. I presume you are using np.float16 because you're in a 32-bit environment and you need that to allow you to create the memmap object without numpy throwing errors.
In a 64-bit environment (tested with 64-bit Python 3.3 on Windows), your code just works out of the box. So, if you have a 64-bit computer available, install 64-bit Python along with 64-bit numpy, scipy and scikit-learn, and you are good to go.
Unfortunately, if you cannot do this, there is no easy fix. I have raised an issue on github, but it is not easy to patch. The fundamental problem is that, within the library, if your dtype is float16, a copy of the array into memory is triggered. The details are below.
So, I hope you have access to a 64-bit environment with plenty of RAM. If not, you will have to split up your array yourself and batch-process it, a rather large task...
N.B. It's really good to see you going to the source to diagnose your problem :) However, if you look at the line where the code fails (in the traceback), you will see that the for batch in gen_batches loop you found is never reached.
Detailed diagnosis:
The actual error generated by the OP's code:
import numpy as np
from sklearn.decomposition import IncrementalPCA

ut = np.memmap('my_array.mmap', dtype=np.float16, mode='w+', shape=(140000, 3504))
clf = IncrementalPCA(copy=False)
X_train = clf.fit_transform(ut)
is:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\sklearn\base.py", line 433, in fit_transform
    return self.fit(X, **fit_params).transform(X)
  File "C:\Python27\lib\site-packages\sklearn\decomposition\incremental_pca.py", line 171, in fit
    X = check_array(X, dtype=np.float)
  File "C:\Python27\lib\site-packages\sklearn\utils\validation.py", line 347, in check_array
    array = np.array(array, dtype=dtype, order=order, copy=copy)
MemoryError
The call to check_array uses dtype=np.float, but the original array has dtype=np.float16. Even though check_array() defaults to copy=False and passes this through to np.array(), the flag is ignored (as per the docs) in order to satisfy the requirement that the dtype be changed; therefore a copy is made by np.array.
This could be solved in the IncrementalPCA code by ensuring that the dtype was preserved for arrays with dtype in (np.float16, np.float32, np.float64). However, when I tried that patch, it only pushed the MemoryError further along the chain of execution.
The same copying problem occurs when the code calls linalg.svd() from the main scipy code, and this time the error occurs during a call to gesdd(), a wrapped native LAPACK function. Thus, I do not think there is a way to patch this (at least not an easy one; at a minimum it would mean altering code in core scipy).
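The copy-on-conversion behaviour is easy to demonstrate in isolation. A small sketch (the temporary file path is incidental): converting a float16 memmap to float64 materialises a brand-new in-memory array, while keeping the dtype leaves the result backed by the file.

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'demo.mmap')
mm = np.memmap(path, dtype=np.float16, mode='w+', shape=(4, 3))

same = np.asarray(mm)                    # dtype unchanged: a view on the file
conv = np.asarray(mm, dtype=np.float64)  # dtype differs: full in-memory copy

print(np.shares_memory(same, mm))   # True  - still backed by the memmap
print(np.shares_memory(conv, mm))   # False - freshly allocated
```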

Does the following alone trigger the crash?
X_train_mmap = np.memmap('my_array.mmap', dtype=np.float16,
                         mode='w+', shape=(n_samples, n_features))
clf = IncrementalPCA(n_components=50).fit(X_train_mmap)
If not, then you can use that model to transform (project) your data iteratively into a smaller representation, in batches:
X_projected_mmap = np.memmap('my_result_array.mmap', dtype=np.float16,
                             mode='w+', shape=(n_samples, clf.n_components))
for batch in gen_batches(n_samples, clf.batch_size_):
    X_batch_projected = clf.transform(X_train_mmap[batch])
    X_projected_mmap[batch] = X_batch_projected
I have not tested that code but I hope that you get the idea.
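To make the idea concrete, here is a small self-contained sketch of the same fit/transform-in-batches pattern (toy sizes, and float64 to sidestep the float16 copy problem; in the real case the arrays would be memmaps of shape (140000, 3504)):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.utils import gen_batches

n_samples, n_features, n_components, batch = 200, 30, 5, 50
rng = np.random.RandomState(0)
X = rng.rand(n_samples, n_features)      # stands in for the big memmap

ipca = IncrementalPCA(n_components=n_components)
for sl in gen_batches(n_samples, batch):
    ipca.partial_fit(X[sl])              # never sees more than one chunk

X_proj = np.empty((n_samples, n_components))
for sl in gen_batches(n_samples, batch):
    X_proj[sl] = ipca.transform(X[sl])   # project chunk by chunk

print(X_proj.shape)  # (200, 5)
```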

Related

What does "Failed to allocate CPU Memory" mean in MXNet?

It is weird that there are no such questions on Stack Overflow yet.
I am using MXNet, trying to run the VQA example on my PC, using mx.io.DataIter to read the data, but I get:
mxnet.base.MXNetError: [11:05:07] c:\projects\mxnet-distro-win\mxnet-build\src\storage\./cpu_device_storage.h:70: Failed to allocate CPU Memory
on these two lines:
File "E:\PyProjects\TestVqa\VQAtrainIter.py", line 71, in reset
  self.nd_img.append(mx.ndarray.array(self.img, dtype=self.dtype))
File "E:\PyProjects\Pro\venv\lib\site-packages\mxnet\ndarray\utils.py", line 146, in array
  return _array(source_array, ctx=ctx, dtype=dtype)
The first frame is in my code and the second is in the library code.
So what exactly does "Failed to allocate CPU Memory" mean?
The relevant part of VQAtrainIter.py is below:
def reset(self):
    self.curr_idx = 0
    self.nd_text = []
    self.nd_img = []
    self.ndlabel = []
    for buck in self.data:
        label = self.answer  # self.answer and self.img come from an npy file
        self.nd_text.append(mx.ndarray.array(buck, dtype=self.dtype))
        self.nd_img.append(mx.ndarray.array(self.img, dtype=self.dtype))
        self.ndlabel.append(mx.ndarray.array(label, dtype=self.dtype))
Based on what you mentioned, you're using npz files, which are always loaded fully into memory. DataIter does fetch the data batch by batch, but if its source is an npz archive, which is itself held in memory, then everything still has to fit into RAM. What DataIter does is allow you to avoid needing twice as much memory, because it copies only a small part of the data from the numpy array into an mxnet.nd.NDArray for processing.
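As an illustration of the alternative, plain .npy files (unlike .npz archives) can be memory-mapped, so a batch iterator only pulls in the rows it actually touches (the file name and shapes here are made up):

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'img.npy')
np.save(path, np.zeros((1000, 64), dtype=np.float32))

img = np.load(path, mmap_mode='r')   # maps the file, no full load into RAM
batch = np.array(img[0:32])          # copy out just one batch
print(batch.shape)  # (32, 64)
```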

Error when using scipy.misc.imread inside the loop

I have a case similar to this question.
I faced it when training a deep learning model and reading images inside a loop:
scipy.misc.imread(rawImagePath).astype(numpy.float32)
The code above works perfectly the vast majority of the time, but sometimes I get this error:
batch 90, loading batch data...
x path: /path_to_dataset/train/9.png
2017-10-31 20:23:34.531221: W tensorflow/core/kernels/queue_base.cc:294] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
  File "train.py", line 452, in <module>
    batch_X = load_data(batch_X_paths)
  File "/path_to_my_script/utilities.py", line 85, in load_data
    x = misc.imread(batch_X_paths[i]).astype(np.float32)
TypeError: float() argument must be a string or a number
The difference is: I have a fixed set of files and I do not write new ones to the directory.
I did a little research and figured out that the error comes not from numpy.astype() but from PIL.Image.open() (scipy.misc uses PIL under the hood).
Also, I managed to reproduce this bug in a Jupyter Notebook: I made a loop with the reading/astype code inside, and after a few seconds (my test .png is about 10 MB) I interrupted the cell's execution and voila!
Inside the scipy.misc sources there is code like this:
def imread(name, flatten=False, mode=None):
    im = Image.open(name)
    return fromimage(im, flatten=flatten, mode=mode)
Image here is a PIL object, and Image.open() gives us a 0-dimensional result when the cell is interrupted.
I have tried Evert's advice and made a loop with try/except, but sometimes it fails even after 5 attempts.
I also tried OpenCV's imread() instead of scipy.misc's, and sometimes it fails too, with the error text:
libpng error: incorrect data check
So... I have run out of ideas.
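In case it helps, one direction I considered is a defensive wrapper that validates the decoded array before casting (load_image and its arguments are hypothetical names, not part of any library):

```python
import numpy as np

def load_image(path, reader, attempts=3):
    """Hypothetical defensive loader: retry the read and check that the
    decoded result is a real pixel array before casting to float32."""
    for _ in range(attempts):
        img = np.asarray(reader(path))
        if img.ndim >= 2 and img.dtype != object:
            return img.astype(np.float32)
    raise IOError('could not decode %s' % path)

# Example with a stand-in reader instead of misc.imread:
fake = load_image('x.png', lambda p: np.zeros((2, 2), dtype=np.uint8))
print(fake.dtype)  # float32
```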

pyglet.graphics: IndexError on ctypes array creation

I'm developing a small game with pyglet. One centerpiece is, of course, drawing coloured rectangles. I initially did this by creating images in memory and blit()ing them, which worked fine. After noticing how ugly, roundabout and inefficient this is (yes, I profiled: ColorRect.draw() took significant time and became 10x more efficient through this change), I've started creating vertex lists instead, via pyglet.graphics.Batch (I copied most of the code verbatim from one of the examples). Since then, I experience a weird exception in some low-level OpenGL code, for which I have failed to find a cause or a reliable reproduction.
There is no apparent relation to gameplay events: nothing exceptional happens just before, or else I constantly miss it. As the error occurs somewhere deep in the event loop, I cannot easily track down which position update causes it. Honestly, I'm stumped. Thus I'll brain-dump what I have found out and hope for some kind psychic.
I've tried it out on 32-bit Windows 7 (I may get around to trying Ubuntu 11.10 soon) with Python 3.2.2 and pyglet revision 043180b64260 (pulled from Google Code and built from source; the 1.1.4 release is harder to install, as it doesn't run 2to3 automatically, though it appears to be equally py3k-ready). I'll probably update to the latest mercurial version next, but it's only a few commits behind and the changes seem entirely unrelated.
The full traceback (censored some paths out of principle, but note it's in its own virtualenv):
Traceback (most recent call last):
  File "<my main file>", line 152, in <module>
    main()
  File "<my main file>", line 148, in main
    run()
  File "<my main file>", line 125, in run
    pyglet.app.run()
  File "<virtualenv>\Lib\site-packages\pyglet\app\__init__.py", line 123, in run
    event_loop.run()
  File "<virtualenv>\Lib\site-packages\pyglet\app\base.py", line 135, in run
    self._run_estimated()
  File "<virtualenv>\Lib\site-packages\pyglet\app\base.py", line 164, in _run_estimated
    timeout = self.idle()
  File "<virtualenv>\Lib\site-packages\pyglet\app\base.py", line 278, in idle
    window.switch_to()
  File "<virtualenv>\Lib\site-packages\pyglet\window\win32\__init__.py", line 305, in switch_to
    self.context.set_current()
  File "<virtualenv>\Lib\site-packages\pyglet\gl\win32.py", line 213, in set_current
    super(Win32Context, self).set_current()
  File "<virtualenv>\Lib\site-packages\pyglet\gl\base.py", line 320, in set_current
    buffers = (gl.GLuint * len(buffers))(*buffers)
IndexError: invalid index
Running pdb post-mortem (actively stepping through code until it happens used to be infeasible, as the FPS went from 60 down to 7) shows:
buffers is a list of ints. I have no idea what these represent or where they come from, but they are pulled from a list called self.object_space._doomed_textures (where self is a window object). The associated comment says this block of code releases textures scheduled for deletion. I don't think I explicitly use textures anywhere, but who knows what pyglet does under the hood; I assume these integers are the IDs of the textures to be destroyed.
gl.GLuint is an alias for ctypes.c_ulong; thus (gl.GLuint * len(buffers))(*buffers) creates a ulong array of the same length and contents.
I can evaluate the very same expression at the pdb prompt without errors or data corruption.
Independent experiments with ctypes (outside the virtualenv and without importing pyglet) show that IndexError is raised if too many arguments are given to the array constructor. This makes no sense here; both experimentation and logic suggest the length and the argument count must always match.
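The ctypes behaviour from that last experiment can be shown in a few standalone lines (no pyglet involved):

```python
import ctypes

buffers = [3, 7, 42]
arr = (ctypes.c_ulong * len(buffers))(*buffers)  # gl.GLuint is c_ulong
print(list(arr))  # [3, 7, 42]

# Passing more initialisers than the array type holds raises IndexError,
# which matches the "invalid index" message in the traceback:
try:
    (ctypes.c_ulong * 2)(1, 2, 3)
except IndexError as exc:
    print('IndexError:', exc)
```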
Are there other cases where this exception may occur? May this be a bug of pyglet, or am I misusing the library and missed the associated warning?
Would the code which creates and maintains the vertex lists be of any use in debugging this? There's probably something wrong with it. I've already stared at it, but since I have little experience with pyglet.graphics, this was of limited use. Just leave a comment if you'd like to see the ColorRect code.
Any other ideas what might cause this?
It is a bit hard to provide a really relevant answer, since there is no code provided, but here is what I can see from the error output:
buffers = (gl.GLuint * len(buffers))(*buffers)
If I understand correctly, you are multiplying the size of a GLuint (4 bytes) with your buffer's actual length (if initialized). Maybe that's why your index is invalid: because it is too high?
Usually it would be OK, since a buffer is in bytes, but you said that it is a list of ints?
Hope it helps.

Applying SVD throws a Memory Error instantaneously?

I am trying to apply SVD to my matrix (3241 x 12596), obtained after some text processing (with the ultimate goal of performing Latent Semantic Analysis), and I am unable to understand why this is happening, as my 64-bit machine has 16GB of RAM. The moment svd(self.A) is called, it throws an error. The precise error is given below:
Traceback (most recent call last):
  File ".\SVD.py", line 985, in <module>
    _svd.calc()
  File ".\SVD.py", line 534, in calc
    self.U, self.S, self.Vt = svd(self.A)
  File "C:\Python26\lib\site-packages\scipy\linalg\decomp_svd.py", line 81, in svd
    overwrite_a = overwrite_a)
MemoryError
So I tried using
self.U, self.S, self.Vt = svd(self.A, full_matrices= False)
and this time, it throws the following error:
Traceback (most recent call last):
  File ".\SVD.py", line 985, in <module>
    _svd.calc()
  File ".\SVD.py", line 534, in calc
    self.U, self.S, self.Vt = svd(self.A, full_matrices= False)
  File "C:\Python26\lib\site-packages\scipy\linalg\decomp_svd.py", line 71, in svd
    return numpy.linalg.svd(a, full_matrices=0, compute_uv=compute_uv)
  File "C:\Python26\lib\site-packages\numpy\linalg\linalg.py", line 1317, in svd
    work = zeros((lwork,), t)
MemoryError
Is this matrix simply too large for Numpy to handle, and is there something I can do at this stage without changing the methodology itself?
Yes, the full_matrices parameter to scipy.linalg.svd is important: your input is highly rank-deficient (rank at most 3,241), so you don't want to allocate the entire 12,596 x 12,596 matrix for V!
More importantly, matrices coming from text processing are likely very sparse. The scipy.linalg.svd is dense and doesn't offer truncated SVD, which results in a) tragic performance and b) lots of wasted memory.
Have a look at the sparseSVD package from PyPI, which works on sparse input, and where you can ask for the top K factors only. Or try scipy.sparse.linalg.svds, though that's not as efficient and is only available in newer versions of scipy.
Or, to avoid the gritty details completely, use a package that does efficient LSA for you transparently, such as gensim.
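A short sketch of the truncated, sparse route via scipy.sparse.linalg.svds (toy sizes here; the real matrix would be 3241 x 12596):

```python
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Truncated SVD: ask only for the top k singular triplets instead of
# allocating the full V matrix.
A = sparse_random(300, 500, density=0.01, random_state=0)
U, s, Vt = svds(A, k=10)

print(U.shape, s.shape, Vt.shape)  # (300, 10) (10,) (10, 500)
```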
As it turns out, thanks to @Ferdinand Beyer, I realised that I had not noticed I was using a 32-bit version of Python on my 64-bit machine.
Using a 64-bit version of Python and reinstalling all the libraries solved the problem.

Python/Numpy MemoryError

Basically, I am getting a memory error in Python when trying to perform an algebraic operation on a numpy matrix. The variable u is a large matrix of doubles (in the failing case, a 288x288x156 matrix; I only get this error in this huge case, and I am able to do this with other large matrices, just not this big). Here is the Python error:
Traceback (most recent call last):
  File "S:\3D_Simulation_Data\Patient SPM Segmentation\20 pct perim erosion flattop\SwSim.py", line 121, in __init__
    self.mainSimLoop()
  File "S:\3D_Simulation_Data\Patient SPM Segmentation\20 pct perim erosion flattop\SwSim.py", line 309, in mainSimLoop
    u = solver.solve_cg(u,b,tensors,param,fdHold,resid) # Solve the left hand side of the equation Au=b with conjugate gradient method to approximate u
  File "S:\3D_Simulation_Data\Patient SPM Segmentation\20 pct perim erosion flattop\conjugate_getb.py", line 47, in solve_cg
    u = u + alpha*p
MemoryError
u = u + alpha*p is the line of code that fails.
alpha is just a double, while u and p are the large matrices described above (both of the same size).
I don't know that much about memory errors especially in Python. Any insight/tips into solving this would be very appreciated!
Thanks
Rewrite it as
p *= alpha
u += p
and this will use much less memory. Whereas p = p*alpha allocates a whole new matrix for the result of p*alpha and then discards the old p, p *= alpha does the same thing in place.
In general, with big matrices, try to use op= assignment.
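A tiny sketch of the difference (the shapes are arbitrary):

```python
import numpy as np

u = np.ones((100, 100))
p = np.ones((100, 100))
alpha = 0.5

# u = u + alpha*p would allocate two temporaries (alpha*p and the sum).
# The in-place version reuses the existing buffers instead:
p *= alpha   # overwrite p with alpha*p
u += p       # accumulate into u, no new allocation

print(u[0, 0])  # 1.5
```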
Another tip I have found to avoid memory errors is to manually control garbage collection. When objects are deleted or go out of scope, the memory used for these variables isn't freed until a garbage collection is performed. I have found with some of my code using large numpy arrays that I get a MemoryError, but that I can avoid this if I insert calls to gc.collect() at appropriate places.
You should only look into this option if using "op=" style operators etc. doesn't solve your problem, as it's probably not best coding practice to have gc.collect() calls everywhere.
Your matrix has 288x288x156 = 12,939,264 entries, which for doubles is about 100MB per array; the failing expression u + alpha*p needs several such arrays (u, p, the temporary alpha*p and the result) in memory at once, on the order of 400MB. numpy throwing a MemoryError at you just means that the memory needed to perform the operation wasn't available from the OS.
If you can work with sparse matrices this might save you a lot of memory.
