I would like to load as much data as is safe, so that the current process works fine and other processes do too. I would prefer to use RAM only (no swap), but any suggestions are welcome. Excess data can be discarded. What is the proper way of doing this? If I just wait for a MemoryError, the system becomes inoperable (when using a list).
data_storage = []
for data in read_next_data():
    data_storage.append(data)
The data is eventually loaded into a numpy array.
psutil has a virtual_memory function whose result contains, among other attributes, one representing the free memory:
>>> psutil.virtual_memory()
svmem(total=4170924032, available=1743937536, percent=58.2, used=2426986496, free=1743937536)
>>> psutil.virtual_memory().free
1743937536
That should be pretty accurate (although the function call is costly, i.e. slow, at least on Windows). A MemoryError doesn't take memory used by other processes into account; it is only raised when the memory required for the array exceeds the total RAM, free or not.
You may have to guess the point at which to stop accumulating, because the free memory can change (other processes need additional memory from time to time), and the conversion to numpy.array may temporarily double the memory used, because at that moment both the list and the array must fit into your RAM.
However, you can also approach this in a different way:
Read in the first dataset: read_next_data().
Calculate the free memory at that point: psutil.virtual_memory().free
Use the shape of the first dataset and its dtype to calculate the shape of an array that fits comfortably into RAM. Say it should use some factor (e.g. 75%) of the available free memory: rows = freeMemory * factor / (firstDataShape * memoryPerElement). That gives you the number of datasets you can read in at once.
Create an array of that shape: arr = np.empty((rows, *firstShape), dtype=firstDtype).
Load the next datasets, but store them directly into your array: arr[i] = next(reader), where reader = read_next_data() is the generator. That way you don't keep those lists around and you avoid the doubled memory (a sketch of this approach follows below).
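A minimal sketch of that approach, assuming read_next_data() is a generator yielding equally-shaped numpy arrays and that using 75% of the currently free memory is acceptable (both are assumptions, not part of the question):

import numpy as np
import psutil

reader = read_next_data()                   # generator from the question
first = next(reader)                        # first dataset defines shape and dtype

factor = 0.75                               # use ~75% of the currently free RAM
rows = int(psutil.virtual_memory().free * factor // first.nbytes)

arr = np.empty((rows, *first.shape), dtype=first.dtype)
arr[0] = first

filled = 1
for data in reader:
    if filled >= rows:
        break                               # excess data can be discarded, as the question allows
    arr[filled] = data
    filled += 1

arr = arr[:filled]                          # trim if the stream ended early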
Related
I am reading a 15 GB .csv file using the pandas read_csv() function with its iterator/chunksize functionality, because I only need a subset of about 20% of the file.
I am doing this in PyCharm, where I set the max heap size to 18 GB (although I have 16 GB of RAM) and the minimum allocated memory to half of that, 9 GB. Throughout this process PyCharm indicates I am using around 100-200 MB of RAM, while the Windows Task Manager indicates approximately 2.5 GB, which includes both the PyCharm and Python processes. About 45% of my memory is still free according to the Task Manager.
As far as I can see, nothing indicates that I am running out of memory. Still, while reading this data I get a MemoryError which tells me:
MemoryError: Unable to allocate array with shape (4, 8193780) and data type float64
Can someone clarify this for me? I suspected that maybe the final dataframe is larger than my RAM can handle, but that would only be:
(4 * 8193780 * 8 bytes (float64)) / 1024**3 ≈ 0.24 GiB, i.e. well under 1 GiB
So the above also does not seem to be the problem, or am I missing something here?
I think you are using close to 15 GB of memory just to read the file: I suspect read_csv() still works through the whole file even when you use the chunk/iterator options to keep only 20% of it. On top of that, you are running Windows and PyCharm, which need at least 1 GB of memory themselves, so adding it all up you are probably out of memory.
Here are some ways to approach the problem:
Verify the dtype of your array and try to find the best one for your purpose. For example, you are using float64; consider whether float32 or even float16 might be appropriate (see the sketch below).
Consider whether your computation can be done on a subset of the data. This is called subsampling. Maybe with subsampling you get a good enough model (this may be the case for a clustering algorithm like k-means).
You may search for out-of-core solutions. This may mean either rethinking your algorithm (can you split the work?) or trying a solution that does it transparently.
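For instance, a hedged sketch combining chunked reading with a smaller dtype; the file name, the column names and the filter condition are placeholders, not taken from the question:

import numpy as np
import pandas as pd

chunks = []
for chunk in pd.read_csv("big.csv", chunksize=1_000_000,
                         dtype={"value": np.float32}):   # smaller dtype where it is safe
    subset = chunk[chunk["keep_flag"] == 1]              # keep roughly the 20% you need
    chunks.append(subset)

df = pd.concat(chunks, ignore_index=True)                # only the subset ever accumulates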
I get a memory error when processing a very large (>50 GB) file (problem: the RAM gets full).
My idea is to read only 500 kilobytes of data at a time, process it, delete it from memory, and move on to the next 500 kB. Is there a better solution? Or, if this approach is reasonable, how do I do it with a numpy array?
This is just a quarter of the code (just to give an idea):
import h5py
import numpy as np
import sys
import time
import os
hdf5_file_name = r"test.h5"
dataset_name = 'IMG_Data_2'
file = h5py.File(hdf5_file_name,'r+')
dataset = file[dataset_name]
data = dataset.value
dec_array = data.flatten()
........
I get the memory error at this very point, as it tries to put all the data into memory.
Quick answer
numpy.memmap allows you to present a large file on disk as a numpy array. I don't know whether it allows mapping files larger than RAM+swap, though. Worth a shot.
[Presentation about out-of-memory work with Python](http://hilpisch.com/TPQ_Out_of_Memory_Analytics.html)
Longer answer
A key question is how much RAM you have (less than or more than 10 GB) and what kind of processing you're doing (do you need to look at each element in the dataset only once, or do you need to look at the whole dataset at once).
If it's less than 10 GB and you only need to look once, then your approach seems like the most sensible one. It's a standard way to deal with datasets that are larger than main memory. What I'd do is increase the size of a chunk from 500 kB to something closer to the amount of memory you have: perhaps half of physical RAM, but in any case something in the GB range, yet not so large that it causes swapping to disk and interferes with your algorithm. A nice optimisation would be to hold two chunks in memory at a time: one is being processed while the other is being loaded from disk in parallel. This works because loading data from disk is relatively expensive but doesn't require much CPU work; the CPU is basically waiting for the data to arrive. It's harder to do in Python because of the GIL, but numpy and friends should not be affected by that, since they release the GIL during math operations. The threading package might be useful here.
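Applied to the h5py code above, a rough sketch of the chunk-wise approach might look like this, assuming we chunk along the first axis; the chunk size and the process() step are placeholders:

import h5py

chunk_rows = 50_000_000                        # roughly 400 MB of float64 per chunk; tune to your RAM

with h5py.File("test.h5", "r") as f:
    dataset = f["IMG_Data_2"]
    n = dataset.shape[0]
    for start in range(0, n, chunk_rows):
        chunk = dataset[start:start + chunk_rows]   # h5py reads only this slice from disk
        process(chunk.flatten())                    # hypothetical processing step
        del chunk                                   # drop the slice before loading the next one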
If you have low RAM AND need to look at the whole dataset at once (perhaps when computing some quadratic-time ML algorithm, or when doing random accesses into the dataset), things get more complicated and you probably won't be able to use the previous approach. Either upgrade your algorithm to a linear one, or implement some logic to make the algorithms in numpy etc. work with data on disk directly rather than having it all in RAM.
If you have more than 10 GB of RAM, you might let the operating system do the hard work for you and increase the swap size enough to hold the whole dataset. This way everything is loaded into virtual memory, but only a subset is in physical memory at any time, and the operating system handles the transitions between them, so everything looks like one giant block of RAM. How to increase the swap size is OS specific, though.
The memmap object can be used anywhere an ndarray is accepted. Given a memmap fp, isinstance(fp, numpy.ndarray) returns True.
Memory-mapped files cannot be larger than 2GB on 32-bit systems.
When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes.
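A small illustration of those points; the scratch file name is arbitrary:

import numpy as np

fp = np.memmap("scratch.dat", dtype=np.float64, mode="w+", shape=(1000, 100))
print(isinstance(fp, np.ndarray))   # True: memmap is an ndarray subclass
fp[:] = 0.0                         # contents of the newly created region are not guaranteed, so initialise
fp.flush()                          # push pending changes to disk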
I am trying to calculate the cosine similarity of 100,000 vectors, and each of these vectors has 200,000 dimensions.
From reading other questions I know that memmap, PyTables and h5py are my best bets for handling this kind of data, and I am currently working with two memmaps; one for reading the vectors, the other for storing the matrix of cosine similarities.
Here is my code:
import numpy as np
import scipy.spatial.distance as dist
xdim = 200000
ydim = 100000
wmat = np.memmap('inputfile', dtype = 'd', mode = 'r', shape = (xdim,ydim))
dmat = np.memmap('outputfile', dtype = 'd', mode = 'readwrite', shape = (ydim,ydim))
for i in np.arange(ydim):
    for j in np.arange(i + 1, ydim):
        dmat[i, j] = dist.cosine(wmat[:, i], wmat[:, j])
        dmat.flush()
Currently, htop reports that I am using 224 G of VIRT memory and 91.2 G of RES memory, which is climbing steadily. It seems to me as if, by the end of the process, the entire output matrix will be stored in memory, which is something I'm trying to avoid.
QUESTION:
Is this a correct usage of memmaps? Am I writing to the output file in a memory-efficient manner (by which I mean that only the necessary parts of the input and output files, i.e. dmat[i,j] and wmat[:,i] / wmat[:,j], are stored in memory)?
If not, what did I do wrong, and how can I fix this?
Thanks for any advice you may have!
EDIT: I just realized that htop reports total system memory usage at 12 G, so it seems to be working after all... can anyone out there enlighten me? RES is now at 111 G...
EDIT2: The memmap is created from a 1D array consisting of lots and lots of long decimals quite close to 0, which is reshaped to the desired dimensions. The memmap then looks like this:
memmap([[ 9.83721223e-03, 4.42584107e-02, 9.85033578e-03, ...,
-2.30691545e-07, -1.65070799e-07, 5.99395837e-08],
[ 2.96711345e-04, -3.84307391e-04, 4.92968462e-07, ...,
-3.41317722e-08, 1.27959347e-09, 4.46846438e-08],
[ 1.64766260e-03, -1.47337747e-05, 7.43660202e-07, ...,
7.50395136e-08, -2.51943163e-09, 1.25393555e-07],
...,
[ -1.88709000e-04, -4.29454722e-06, 2.39720287e-08, ...,
-1.53058717e-08, 4.48678211e-03, 2.48127260e-07],
[ -3.34207882e-04, -4.60275148e-05, 3.36992876e-07, ...,
-2.30274532e-07, 2.51437794e-09, 1.25837564e-01],
[ 9.24923862e-04, -1.59552854e-03, 2.68354822e-07, ...,
-1.08862665e-05, 1.71283316e-07, 5.66851420e-01]])
In terms of memory usage, there's nothing particularly wrong with what you're doing at the moment. Memmapped arrays are handled at the level of the OS - data to be written is usually held in a temporary buffer, and only committed to disk when the OS deems it necessary. Your OS should never allow you to run out of physical memory before flushing the write buffer.
I'd advise against calling flush on every iteration since this defeats the purpose of letting your OS decide when to write to disk in order to maximise efficiency. At the moment you're only writing individual float values at a time.
In terms of IO and CPU efficiency, operating on a single line at a time is almost certainly suboptimal. Reads and writes are generally quicker for large, contiguous blocks of data, and likewise your calculation will probably be much faster if you can process many lines at once using vectorization. The general rule of thumb is to process as big a chunk of your array as will fit in memory (including any intermediate arrays that are created during your computation).
Here's an example showing how much you can speed up operations on memmapped arrays by processing them in appropriately-sized chunks.
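A minimal sketch of that chunked idea, assuming the two memmap files from the question already exist; the block size is just a tuning knob, not a recommendation:

import numpy as np
from scipy.spatial.distance import cdist

xdim, ydim, block = 200000, 100000, 100

wmat = np.memmap('inputfile', dtype='d', mode='r', shape=(xdim, ydim))
dmat = np.memmap('outputfile', dtype='d', mode='r+', shape=(ydim, ydim))

for i0 in range(0, ydim, block):
    rows_i = np.ascontiguousarray(wmat[:, i0:i0 + block].T)   # (block, xdim), roughly 160 MB in RAM
    for j0 in range(i0, ydim, block):
        rows_j = np.ascontiguousarray(wmat[:, j0:j0 + block].T)
        # one vectorised call fills a block-by-block tile instead of a single scalar
        dmat[i0:i0 + block, j0:j0 + block] = cdist(rows_i, rows_j, 'cosine')
    dmat.flush()                                               # flush once per block row, not per element

The diagonal tiles recompute a few symmetric entries, which is harmless for a sketch. Note also that the column slices of a C-contiguous memmap still touch scattered locations on disk, which is exactly the layout issue discussed next.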
Another thing that can make a huge difference is the memory layout of your input and output arrays. By default, np.memmap gives you a C-contiguous (row-major) array. Accessing wmat by column will therefore be very inefficient, since you're addressing non-adjacent locations on disk. You would be much better off if wmat was F-contiguous (column-major) on disk, or if you were accessing it by row.
The same general advice applies to using HDF5 instead of memmaps, although bear in mind that with HDF5 you will have to handle all the memory management yourself.
Memory maps are exactly what the name says: mappings of (virtual) disk sectors into memory pages. The memory is managed by the operating system on demand. If there is enough memory, the system keeps parts of the files in memory, maybe filling up the whole memory; if there is not enough left, the system may discard pages read from the file or may swap them out into swap space. Normally you can rely on the OS being as efficient as possible.
I have a program which creates an array:
from numpy import zeros, complex_

List1 = zeros((x, y), dtype=complex_)
Currently I am using x = 500 and y = 1000000.
I will initialize the first column of the list by some formula. Then the subsequent columns will calculate their own values based on the preceding column.
After the list is completely filled, I will then display this multidimensional array using imshow().
The size of each value (item) in the list is 24 bytes.
A sample value from the code is: 4.63829355451e-32
When I run the code with y = 10000000, it takes up too much RAM and the system stops the run. How do I solve this problem? Is there a way to save my RAM while still being able to process the list using imshow() easily? Also, how large a list can imshow() display?
There's no way to solve this problem (in any general way).
Computers (as commonly understood) have a limited amount of RAM, and they require elements to be in RAM in order to operate on them.
A complex128 array of size 10000000x500 would require around 74 GiB to store. You'll need to somehow reduce the amount of data you're processing if you hope to use a regular computer to do it (as opposed to a supercomputer).
A common technique is partitioning your data and processing each partition individually (possibly on multiple computers). Depending on the problem you're trying to solve, there may be special data structures that you can use to reduce the amount of memory needed to represent the data - a good example is a sparse matrix.
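For example, a quick illustration with scipy.sparse; the shape and the number of non-zero elements here are made up for the sake of the comparison:

import numpy as np
from scipy import sparse

n, nnz = 100_000, 1_000_000                    # dense float64 storage would need ~74.5 GiB
rows = np.random.randint(0, n, size=nnz)
cols = np.random.randint(0, n, size=nnz)
vals = np.random.rand(nnz)

mat = sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
print(mat.data.nbytes + mat.indices.nbytes + mat.indptr.nbytes)   # on the order of 10-20 MB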
It's very unusual to need this much memory, so make sure to carefully consider whether it's actually needed before you delve into extremely complex workarounds.
I need to multiply two big matrices and sort their columns.
import numpy
a = numpy.random.rand(1000000, 100)
b = numpy.random.rand(300000, 100)
c = numpy.dot(b, a.T)
sorted = [numpy.argsort(j)[:10] for j in c.T]
This process takes a lot of time and memory. Is there a way to speed it up? If not, how can I calculate the RAM needed for this operation? I currently have an EC2 instance with 4 GB of RAM and no swap.
I was wondering whether this operation can be serialized so that I don't have to store everything in memory.
One thing you can do to speed things up is compile numpy against an optimized BLAS library such as ATLAS, GotoBLAS or Intel's proprietary MKL.
To calculate the memory needed, you need to monitor Python's Resident Set Size ("RSS"). The following commands were run on a UNIX system (FreeBSD to be precise, on a 64-bit machine).
> ipython
In [1]: import numpy as np
In [2]: a = np.random.rand(1000, 1000)
In [3]: a.dtype
Out[3]: dtype('float64')
In [4]: del(a)
To get the RSS I ran:
ps -xao comm,rss | grep python
[Edit: See the ps manual page for a complete explanation of the options, but basically these ps options make it show only the command and resident set size of all processes. The equivalent format for Linux's ps would be ps -xao c,r, I believe.]
The results are:
After starting the interpreter: 24880 kiB
After importing numpy: 34364 kiB
After creating a: 42200 kiB
After deleting a: 34368 kiB
Calculating the size:
In [4]: (42200 - 34364) * 1024
Out[4]: 8024064
In [5]: 8024064/(1000*1000)
Out[5]: 8.024064
As you can see, the calculated size matches the 8 bytes for the default datatype float64 quite well. The difference is internal overhead.
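If you'd rather check from inside Python than with ps, psutil (mentioned earlier on this page) reports the same RSS figure:

import os
import psutil

proc = psutil.Process(os.getpid())
print(proc.memory_info().rss // 1024, "kiB")   # resident set size of the current interpreter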
The size of your original arrays in MiB will be approximately:
In [11]: 8*1000000*100/1024**2
Out[11]: 762.939453125
In [12]: 8*300000*100/1024**2
Out[12]: 228.8818359375
That's not too bad. However, the dot product will be way too large:
In [19]: 8*1000000*300000/1024**3
Out[19]: 2235.1741790771484
That's 2235 GiB!
What you can do is split up the problem and perform the dot operation in pieces:
load b as an ndarray
load every row from a as an ndarray in turn.
multiply the row by every column of b and write the result to a file.
del() the row and load the next row.
This will not make it faster, but it will make it use less memory! (A sketch follows below.)
Edit: In this case I would suggest writing the output file in binary format (e.g. using struct or ndarray.tofile). That would make it much easier to read a column from the file with e.g. a numpy.memmap.
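A rough sketch of those steps; here a and b are simply the in-memory arrays from the question (in practice you would stream the rows of a from a file as described above), and 'c.dat' is a placeholder output name. Note that the output file still ends up around 2.2 TiB:

import numpy

a = numpy.random.rand(1000000, 100)
b = numpy.random.rand(300000, 100)

with open('c.dat', 'wb') as out:
    for row in a:                   # one row of a at a time
        (b @ row).tofile(out)       # one column of c, i.e. one row of c.T, length 300000
        # only this ~2.4 MB result is ever held in memory at once

# later the result can be mapped back without loading it all:
# c_T = numpy.memmap('c.dat', dtype=numpy.float64, mode='r', shape=(1000000, 300000))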
What DrV and Roland Smith said are good answers; they should be listened to. My answer does nothing more than present an option to make your data sparse, a complete game-changer.
Sparsity can be extremely powerful. It would transform your O(100 * 300000 * 1000000) operation into an O(k) operation with k non-zero elements (sparsity only means that the matrix is largely zero). I know sparsity has been mentioned by DrV and disregarded as not applicable but I would guess it is.
All that needs to be done is to find a sparse representation for computing this transform (and interpreting the results is another ball game). Easy (and fast) methods include the Fourier transform or wavelet transform (both rely on similarity between matrix elements) but this problem is generalizable through several different algorithms.
From my experience with problems like this, it smells like a relatively common problem that is typically solved through some clever trick. In a field like machine learning, where these types of problems are classified as "simple", that is often the case.
You have a problem in any case. As Roland Smith shows you in his answer, the amount of data and the number of calculations are enormous. You may not be very familiar with linear algebra, so a few words of explanation might help in understanding (and then hopefully solving) the problem.
Your arrays are both a collection of vectors with length 100. One of the arrays has 300 000 vectors, the other one 1 000 000 vectors. The dot product between these arrays means that you calculate the dot product of each possible pair of vectors. There are 300 000 000 000 such pairs, so the resulting matrix is either 1.2 TB or 2.4 TB depending on whether you use 32 or 64-bit floats.
On my computer dot multiplying a (300,100) array with a (100,1000) array takes approximately 1 ms. Extrapolating from that, you are looking at a 1000 s calculation time (depending on the number of cores).
The nice thing about taking a dot product is that you can do it piecewise. Keeping the output is then another problem.
If you were running it on your own computer, calculating the resulting matrix could be done in the following way:
create an output array as a np.memmap array onto the disk
calculate the results one row at a time (as explained by Roland Smith)
This would result in a linear file write with a largish (2.4 TB) file.
This does not require too many lines of code. However, make sure everything is transposed in a suitable way: transposing the input arrays is cheap, while transposing the output is extremely expensive. Accessing the resulting huge array is cheap if you access elements close to each other, and expensive if you access elements far away from each other.
Sorting a huge memmapped array has to be done carefully. You should use in-place sort algorithms which operate on contiguous chunks of data. The data is stored in 4 KiB chunks (512 or 1024 floats), and the fewer chunks you need to read, the better.
Now that you are not running the code on your own machine but on a cloud platform, things change a lot. Usually cloud SSD storage is very fast with random accesses, but IO is expensive (also in terms of money). Probably the least expensive option is to calculate suitable chunks of data and send them to S3 storage for further use. The "suitable chunk" part depends on how you intend to use the data. If you need to process individual columns, then you send one or a few columns at a time to the cloud object storage.
However, a lot depends on your sorting needs. Your code looks as if you are ultimately only looking at the first few items of each column. If this is the case, then you should only calculate those first few items and not the full output matrix. That way you can do everything in memory (see the sketch below).
Maybe if you tell a bit more about your sorting needs, there can be a viable way to do what you want.
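As an illustration of calculating only the first few items, a hedged sketch that keeps just the 10 smallest indices per column of c, so the full (1000000, 300000) matrix never exists; the chunk size is a guess, and a and b are the arrays from the question:

import numpy

a = numpy.random.rand(1000000, 100)
b = numpy.random.rand(300000, 100)

chunk = 100                                           # 100 x 300000 float64 is roughly 240 MB per block
top10 = numpy.empty((a.shape[0], 10), dtype=numpy.int64)

for start in range(0, a.shape[0], chunk):
    block = a[start:start + chunk] @ b.T              # these rows of c.T, computed on the fly
    part = numpy.argpartition(block, 10, axis=1)[:, :10]            # 10 smallest per row, unordered
    order = numpy.argsort(numpy.take_along_axis(block, part, axis=1), axis=1)
    top10[start:start + chunk] = numpy.take_along_axis(part, order, axis=1)
    # block is overwritten on the next iteration, so memory use stays at one chunk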
Oh, one important thing: are your matrices dense or sparse? (Sparse means they mostly contain zeros.) If you expect your output matrix to be mostly zero, that may change the game completely.