I have a piece of code whose bottleneck is a calculation involving tensor contractions. Let's say I want to calculate a tensor A_{i,j,k,l}(X) whose non-zero entries for a single x ∈ X number N ~ 10^5, where X represents a grid with M total points, M ~ 1000 approximately. For a single element of the tensor A, the rhs of the equation looks something like:
A_{ijkl}(M) = \sum_{m,n,p,q} S_{ijmn}(M) B_{mnpq}(M) T_{pqkl}(M)
In addition, the middle tensor B_{m,n,p,q}(M) is obtained by numerical convolution of arrays so that:
B_{mnpq}(M) = (L_{mn} * F_{pq})(M)
where "*" is the convolution operator, and all tensors have appoximately the same number of elements as A. My problem has to do with efficiency of the sums; to compute a single rhs of A, it takes very long times given the complexity of the problem. I have a "keys" system, where each tensor element is accessed by its unique key combination ( ( p,q,k,l ) for T for example ) taken from a dictionary. Then the dictionary for that specific key gives the Numpy array associated to that key to perform an operation, and all operations (convolutions, multiplications...) are done using Numpy. I have seen that the most time consuming part is actually due to the nested loop (I loop over all keys (i,j,k,l) of the A tensor, and for each key, a rhs like the one above needs to be computed). Is there any efficient way to do this? Consider that:
1) Using simple numpy arrays of (4+1) dimensions results in high memory usage, since all tensors are of type complex
2) I have tried several approaches: Numba is quite limited when working with dictionaries, and some important NumPy features that I need are not currently supported. For instance, numpy.convolve() under Numba only takes the first two arguments, but not the "mode" argument, which would considerably reduce the needed convolution interval; I don't need the "full" output of the convolution
3) My most recent approach is to implement this part using Cython... but this is quite time-consuming as well as more error-prone given the logic of the code.
Any ideas on how to deal with such complexity using Python?
Thanks!
You have to make your question a bit more precise, which also includes a working code example of what you have already tried. It is, for example, unclear why you use dictionaries in these tensor contractions. Dictionary lookups look like a weird thing to do in this calculation, but maybe I didn't get the point of what you really want to do.
Tensor contraction is actually very easy to implement in Python (NumPy): there are methods to find the best way to contract the tensors, and they are really easy to use (np.einsum).
Creating some data (this should be part of the question)
import numpy as np
import time
i=20
j=20
k=20
l=20
m=20
n=20
p=20
q=20
#I don't know what complex 2 means, I assume it is complex128 (real and imaginary part are in float64)
#size of all arrays is 1.6e5
Sum_=np.random.rand(m,n,p,q).astype(np.complex128)
S_=np.random.rand(i,j,m,n).astype(np.complex128)
B_=np.random.rand(m,n,p,q).astype(np.complex128)
T_=np.random.rand(p,q,k,l).astype(np.complex128)
The naive way
This code is basically the same as writing it in loops using Cython or Numba without calling BLAS routines (ZGEMM) or optimizing the contraction order -> 8 nested loops to do the job.
t1=time.time()
A=np.einsum("mnpq,ijmn,mnpq,pqkl",Sum_,S_,B_,T_)
print(time.time()-t1)
This results in a very slow runtime of about 330 seconds.
How to increase the speed by a factor of 7700
%timeit A=np.einsum("mnpq,ijmn,mnpq,pqkl",Sum_,S_,B_,T_,optimize="optimal")
#42.9 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Why is this so much faster?
Let's have a look at the contraction path and the internals.
path=np.einsum_path("mnpq,ijmn,mnpq,pqkl",Sum_,S_,B_,T_,optimize="optimal")
print(path[1])
# Complete contraction: mnpq,ijmn,mnpq,pqkl->ijkl
# Naive scaling: 8
# Optimized scaling: 6
# Naive FLOP count: 1.024e+11
# Optimized FLOP count: 2.562e+08
# Theoretical speedup: 399.750
# Largest intermediate: 1.600e+05 elements
#--------------------------------------------------------------------------
#scaling current remaining
#--------------------------------------------------------------------------
# 4 mnpq,mnpq->mnpq ijmn,pqkl,mnpq->ijkl
# 6 mnpq,ijmn->ijpq pqkl,ijpq->ijkl
# 6 ijpq,pqkl->ijkl ijkl->ijkl
and
path=np.einsum_path("mnpq,ijmn,mnpq,pqkl",Sum_,S_,B_,T_,optimize="optimal",einsum_call=True)
print(path[1])
#[((2, 0), set(), 'mnpq,mnpq->mnpq', ['ijmn', 'pqkl', 'mnpq'], False), ((2, 0), {'n', 'm'}, 'mnpq,ijmn->ijpq', ['pqkl', 'ijpq'], True), ((1, 0), {'p', 'q'}, 'ijpq,pqkl->ijkl', ['ijkl'], True)]
Doing the contraction in multiple well-chosen steps reduces the required FLOPs by a factor of 400. But that's not the only thing einsum does here. Just have a look at 'mnpq,ijmn->ijpq', ['pqkl', 'ijpq'], True), ((1, 0): the True stands for a BLAS contraction -> tensordot call -> (matrix-matrix multiplication).
Internally this looks basically as follows:
#consider X as a 4th order tensor of shape (m,n,p,q)
#consider Y as a 4th order tensor of shape (i,j,m,n)
X=np.random.rand(m,n,p,q).astype(np.complex128)
Y=np.random.rand(i,j,m,n).astype(np.complex128)
X_=X.reshape(m*n,p*q) #-> just another view on the data (2D), costs almost nothing (no copy, just a view)
Y_=Y.reshape(i*j,m*n) #-> just another view on the data (2D), costs almost nothing (no copy, just a view)
res=np.dot(Y_,X_) #-> dot is just a wrapper for highly optimized BLAS functions, in case of complex128 ZGEMM
output=res.reshape(i,j,p,q) #-> reshape the result back to 4D, again just a view (no copy)
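As a quick sanity check (a sketch applying the same reshape + dot path to the operands S_ and B_ defined above), the manual BLAS route can be compared against the corresponding single einsum contraction step:

step = np.einsum("mnpq,ijmn->ijpq", B_, S_)
manual = np.dot(S_.reshape(i*j, m*n), B_.reshape(m*n, p*q)).reshape(i, j, p, q)
print(np.allclose(step, manual))  # True: both compute the same contraction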
If I want to instantiate a large boolean sparse matrix to assign values at certain indices later, what's the best way to initialize it?
For example, if I want to initialize a 20000000 x 7000 logical sparse matrix in MATLAB with 10000 filled elements (without specifying the locations of the non-zero elements), I would use the following syntax:
Matrix=logical(sparse([],[],[],20000000,7000,10000))
I have no speed constraints on assigning the non-zero values later.
In Python, if I initialize it as a CSR matrix, the creation of the matrix is very fast.
Matrix=csr_matrix((20000000, 7000), dtype=bool)
CPU times: user 860 µs, sys: 2.43 ms, total: 3.29 ms
Wall time: 9.72 ms
However, I cannot efficiently assign values to a csr_matrix: the operation is very slow and triggers the built-in SparseEfficiencyWarning.
If I try initializing it as a LIL matrix:
Matrix=lil_matrix((20000000, 7000), dtype=bool)
CPU times: user 12.4 s, sys: 624 ms, total: 13 s
Wall time: 13 s
or converting the csr_matrix to a lil_matrix:
Matrix=csr_matrix((20000000, 7000), dtype=bool)
Matrix=Matrix.tolil()
CPU times: user 26.8 s, sys: 734 ms, total: 27.5 s
Wall time: 27.5 s
The initialization takes significantly more time.
Is there any way to speed up the initialization of the LIL matrix? If not, what sparse matrix format can I use to speed up the assignment of non-zero elements to such matrices?
If you need general incremental indexed-access, dok_matrix is probably your best bet.
It's common to use this one for construction (where it can shine in some cases) before converting to something else like csc, csr (which are usually needed for algebraic operations).
Edit: Most of the below stuff focuses on the accumulated time needed for init + filling + whatever is done after that.
In terms of your case: dok_matrix init should be quite instant.
...
Allows for efficient O(1) access of individual elements. Duplicates are not allowed. Can be efficiently converted to a coo_matrix once constructed.
That being said, it also depends on your workflow and code, which were omitted. Given some structure, (python-)loopless task-dependent workflows can surely beat generic (python-)looping that adds one element at a time. Often, this involves coo_matrix.
In your case, for some workflows you don't have any init time at all, as you don't create a matrix a priori, but only collect whatever is needed before creating the matrix in one batch. Not sure how that fits into your computational model (which is a bit strange: init is time-restricted; further usage is free)
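A minimal sketch of both routes, assuming the shapes from the question (the index values are just placeholders):

from scipy.sparse import dok_matrix, coo_matrix
import numpy as np

# Route 1: dok_matrix. Init is near-instant; each assignment is O(1)
M = dok_matrix((20000000, 7000), dtype=bool)
M[12345, 678] = True   # incremental indexed assignment
M_csr = M.tocsr()      # convert once construction is finished

# Route 2: no a-priori matrix. Collect triplets, then build in one batch
rows = np.array([12345, 67890])
cols = np.array([678, 4321])
vals = np.ones(2, dtype=bool)
M2 = coo_matrix((vals, (rows, cols)), shape=(20000000, 7000)).tocsr()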
I used MATLAB sparse quite a bit years ago. Back then you created a sparse matrix with
S = sparse(i,j,v,m,n)
where i, j, v were arrays identifying all of the nonzero values. That extra nz parameter that preallocates 'space' for more nonzeros did not exist.
In scipy, the equivalent is
S = sparse.csc_matrix((v, (i, j)), shape=(m, n))
Again, v,i,j are fully defined arrays. There isn't any nz preallocation option. In fact given how attributes are stored, I don't see how preallocation would work or be beneficial.
As you found out, trying to define nonzero values with the csc/csr format is slow and produces the warning. lil/dok are designed to make iterative addition faster.
csr creation time depends on the number of initial nonzero values, and only marginally on the shape (the indptr array size depends on the number of rows). Normally we don't worry about initialization time for a lil, but with 20000000 rows, I can see why it would take time. It has to make two object dtype arrays, with empty list elements.
Anyway, try to avoid incremental definition. Create the i, j, v arrays from your source, and then build the matrix.
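A sketch of that batch construction, with random placeholder data standing in for whatever your source produces (duplicate (i, j) pairs are merged during conversion):

import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
nnz = 10000
i = rng.integers(0, 20000000, nnz)   # row indices from your source
j = rng.integers(0, 7000, nnz)       # column indices from your source
v = np.ones(nnz, dtype=bool)         # values, all True here
S = sparse.csr_matrix((v, (i, j)), shape=(20000000, 7000))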
I have a large csr_matrix and I am interested in the top ten values and their indices each row. But I did not find a decent way to manipulate the matrix.
Here is my current solution and the main idea is to process them row by row:
row = csr_matrix.getrow(row_number).toarray()[0].ravel()
top_ten_indicies = row.argsort()[-10:]
top_ten_values = row[row.argsort()[-10:]]
By doing this, the advantages of csr_matrix are not fully used. It's more like a brute-force solution.
I don't see what the advantages of csr format are in this case. Sure, all the nonzero values are collected in one .data array, with the corresponding column indexes in .indices. But they are in blocks of varying length. And that means they can't be processed in parallel or with numpy array strides.
One solution is to pad those blocks into common-length blocks. That's what .toarray() does. Then you can find the maximum values with argsort(axis=1) or with argpartition.
Another is to break them into row sized blocks, and process each of those. That's what you are doing with the .getrow. Another way of breaking them up is convert to lil format, and process the sublists of the .data and .rows arrays.
A possible third option is to use the ufunc reduceat method. This lets you apply ufunc reduction methods to sequential blocks of an array. There are established ufuncs like np.add that take advantage of this. argsort is not such a function. But there is a way of constructing a ufunc from a Python function, and gaining some modest speed over regular Python iteration. [I need to look up a recent SO question that illustrates this.]
I'll illustrate some of this with a simpler function, sum over rows.
If A2 is a csr matrix.
A2.sum(axis=1) # the fastest compile csr method
A2.A.sum(axis=1) # same, but with a dense intermediary
[np.sum(l.data) for l in A2] # iterate over the rows of A2
[np.sum(A2.getrow(i).data) for i in range(A2.shape[0])] # iterate with index
[np.sum(l) for l in A2.tolil().data] # sum the sublists of lil format
np.add.reduceat(A2.data, A2.indptr[:-1]) # with reduceat
A2.sum(axis=1) is implemented as a matrix multiplication. That's not relevant to the sort problem, but still an interesting way of looking at the summation problem. Remember csr format was developed for efficient multiplication.
For my current sample matrix (created for another SO sparse question)
<8x47752 sparse matrix of type '<class 'numpy.float32'>'
with 32 stored elements in Compressed Sparse Row format>
some comparative times are
In [694]: timeit np.add.reduceat(A2.data, A2.indptr[:-1])
100000 loops, best of 3: 7.41 µs per loop
In [695]: timeit A2.sum(axis=1)
10000 loops, best of 3: 71.6 µs per loop
In [696]: timeit [np.sum(l) for l in A2.tolil().data]
1000 loops, best of 3: 280 µs per loop
Everything else is 1ms or more.
I suggest focusing on developing your one-row function, something like:
def max_n(row_data, row_indices, n):
    i = row_data.argsort()[-n:]
    # i = row_data.argpartition(-n)[-n:]
    top_values = row_data[i]
    top_indices = row_indices[i]  # do the sparse indices matter?
    return top_values, top_indices, i
Then see how it fits in one of these iteration methods. tolil() looks most promising.
I haven't addressed the question of how to collect these results. Should they be lists of lists, array with 10 columns, another sparse matrix with 10 values per row, etc.?
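For what it's worth, a rough sketch of wiring max_n into the lil iteration (reusing A2 and max_n from above), collecting the results as a plain list of (values, indices, positions) tuples, one per row; with the argsort variant, rows shorter than n simply return everything they have:

Al = A2.tolil()
top = [max_n(np.array(d), np.array(r), 10) for d, r in zip(Al.data, Al.rows)]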
sorting each row of a large sparse & saving top K values & column index - Similar question from several years back, but unanswered.
Argmax of each row or column in scipy sparse matrix - Recent question seeking argmax for rows of csr. I discuss some of the same issues.
how to speed up loop in numpy? - example of how to use np.frompyfunc to create a ufunc. I don't know if the resulting function has the .reduceat method.
Increasing value of top k elements in sparse matrix - get the top k elements of csr (not by row). Case for argpartition.
The row summation implemented with np.frompyfunc:
In [741]: def foo(a,b):
     ...:     return a+b
In [742]: vfoo=np.frompyfunc(foo,2,1)
In [743]: timeit vfoo.reduceat(A2.data,A2.indptr[:-1],dtype=object).astype(float)
10000 loops, best of 3: 26.2 µs per loop
That's respectable speed. But I can't think of a way of writing a binary function (one that takes 2 arguments) that would implement argsort via reduction. So this is probably a dead end for this problem.
Just to answer the original question (for people like me who found this question looking for copy-pasta), here's a solution using multiprocessing, based on @hpaulj's suggestion of converting to lil_matrix and iterating over rows:
from multiprocessing import Pool
from functools import partial

def _top_k(args, k):
    """
    Helper function to process a single row of top_k
    """
    data, row = args
    data, row = zip(*sorted(zip(data, row), reverse=True)[:k])
    return data, row

def top_k(m, k):
    """
    Keep only the top k elements of each row in a csr_matrix
    """
    ml = m.tolil()
    with Pool() as p:
        # pass k explicitly: _top_k runs in worker processes, so it cannot
        # see a k defined only in this function's scope
        ms = p.map(partial(_top_k, k=k), zip(ml.data, ml.rows))
    ml.data, ml.rows = zip(*ms)
    return ml.tocsr()
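A hypothetical usage sketch (the random matrix is just a stand-in; Pool needs an if __name__ == "__main__": guard on platforms that spawn rather than fork, and _top_k assumes every row has at least one stored element):

import scipy.sparse as sp

if __name__ == "__main__":
    m = sp.random(1000, 1000, density=0.05, format="csr")
    m10 = top_k(m, 10)   # keep only the 10 largest entries per row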
One needs to iterate over the rows and get the top indices for each row separately. But this loop can be jitted (and parallelized) to get an extremely fast function.
import numba as nb
import numpy as np

@nb.njit(cache=True, parallel=True)
def row_topk_csr(data, indices, indptr, K):
    # note: assumes every row has at least K stored entries
    m = indptr.shape[0] - 1
    max_indices = np.zeros((m, K), dtype=indices.dtype)
    max_values = np.zeros((m, K), dtype=data.dtype)
    for i in nb.prange(m):
        top_inds = np.argsort(data[indptr[i] : indptr[i + 1]])[::-1][:K]
        max_indices[i] = indices[indptr[i] : indptr[i + 1]][top_inds]
        max_values[i] = data[indptr[i] : indptr[i + 1]][top_inds]
    return max_indices, max_values
Call it like this:
top_pred_indices, _ = row_topk_csr(csr_mat.data, csr_mat.indices, csr_mat.indptr, K)
I need to perform this operation frequently, and this function is fast enough for me: it executes in <1 s on a 1mil x 400k sparse matrix.
HTH.
I need to sort a VERY large genomic dataset using numpy. I have an array of 2.6 billion floats, dimensions = (868940742, 3), which takes up about 20GB of memory on my machine once loaded and just sitting there. I have an early 2015 13" MacBook Pro with 16GB of RAM, a 500GB solid-state HD and a 3.1 GHz Intel i7 processor. Just loading the array overflows to virtual memory, but not to the point where my machine suffers or I have to stop everything else I am doing.
I build this VERY large array step by step from 22 smaller (N, 2) subarrays.
Function FUN_1 generates 2 new (N, 1) arrays using each of the 22 subarrays which I call sub_arr.
The first output of FUN_1 is generated by interpolating values from sub_arr[:,0] on array b = array([X, F(X)]) and the second output is generated by placing sub_arr[:, 0] into bins using array r = array([X, BIN(X)]). I call these outputs b_arr and rate_arr, respectively. The function returns a 3-tuple of (N, 1) arrays:
import numpy as np
def FUN_1(sub_arr):
    """interpolate b values and rates based on position in sub_arr"""
    b = np.load(bfile)
    r = np.load(rfile)
    b_arr = np.interp(sub_arr[:, 0], b[:, 0], b[:, 1])
    rate_arr = np.searchsorted(r[:, 0], sub_arr[:, 0])  # HUGE efficiency gain over np.digitize...
    return r[rate_arr, 1], b_arr, sub_arr[:, 1]
I call the function 22 times in a for-loop and fill a pre-allocated array of zeros full_arr = numpy.zeros([868940742, 3]) with the values:
full_arr[:,0], full_arr[:,1], full_arr[:,2] = FUN_1(sub_arr)
In terms of saving memory at this step, I think this is the best I can do, but I'm open to suggestions. Either way, I don't run into problems up through this point and it only takes about 2 minutes.
Here is the sorting routine (there are two consecutive sorts)
for idx in range(2):
    sort_idx = numpy.argsort(full_arr[:, idx])
    full_arr = full_arr[sort_idx]
    # ...
    # <additional processing, return small (1000, 3) array of stats>
Now this sort had been working, albeit slowly (takes about 10 minutes). However, I recently started using a larger, more fine resolution table of [X, F(X)] values for the interpolation step above in FUN_1 that returns b_arr and now the SORT really slows down, although everything else remains the same.
Interestingly, I am not even sorting on the interpolated values at the step where the sort is now lagging. Here are some snippets of the different interpolation files - the smaller one is about 30% smaller in each case and far more uniform in terms of values in the second column; the slower one has a higher resolution and many more unique values, so the results of interpolation are likely more unique, but I'm not sure if this should have any kind of effect...?
bigger, slower file:
17399307 99.4
17493652 98.8
17570460 98.2
17575180 97.6
17577127 97
17578255 96.4
17580576 95.8
17583028 95.2
17583699 94.6
17584172 94
smaller, more uniform regular file:
1 24
1001 24
2001 24
3001 24
4001 24
5001 24
6001 24
7001 24
I'm not sure what could be causing this issue and I would be interested in any suggestions or just general input about sorting in this type of memory limiting case!
At the moment each call to np.argsort is generating a (868940742,) array of int64 indices, which will take up ~7 GB just by itself. Additionally, when you use these indices to reorder the rows of full_arr you are generating another (868940742, 3) array of floats, since fancy indexing always returns a copy rather than a view.
One fairly obvious improvement would be to sort full_arr in place using its .sort() method. Unfortunately, .sort() does not allow you to directly specify a row or column to sort by. However, you can specify a field to sort by for a structured array. You can therefore force an inplace sort over one of the three columns by getting a view onto your array as a structured array with three float fields, then sorting by one of these fields:
full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0)
In this case I'm sorting full_arr in place by the 0th field, which corresponds to the first column. Note that I've assumed that there are three float64 columns ('f8') - you should change this accordingly if your dtype is different. This also requires that your array is contiguous and in row-major format, i.e. full_arr.flags.C_CONTIGUOUS == True.
Credit for this method should go to Joe Kington for his answer here.
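A small self-contained demonstration of the trick (a sketch with a toy array):

import numpy as np

arr = np.random.randn(1000, 3)          # three float64 columns, C-contiguous
arr.view('f8, f8, f8').sort(order=['f0'], axis=0)
print(np.all(np.diff(arr[:, 0]) >= 0))  # True: rows now ordered by column 0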
Although it requires less memory, sorting a structured array by field is unfortunately much slower compared with using np.argsort to generate an index array, as you mentioned in the comments below (see this previous question). If you use np.argsort to obtain a set of indices to sort by, you might see a modest performance gain by using np.take rather than direct indexing to get the sorted array:
%%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort()
x[idx]
# 1 loops, best of 100: 148 µs per loop
%%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort()
np.take(x, idx, axis=0)
# 1 loops, best of 100: 42.9 µs per loop
However I wouldn't expect to see any difference in terms of memory usage, since both methods will generate a copy.
Regarding your question about why sorting the second array is faster - yes, you should expect any reasonable sorting algorithm to be faster when there are fewer unique values in the array because on average there's less work for it to do. Suppose I have a random sequence of digits between 1 and 10:
5 1 4 8 10 2 6 9 7 3
There are 10! = 3628800 possible ways to arrange these digits, but only one in which they are in ascending order. Now suppose there are just 5 unique digits:
4 4 3 2 3 1 2 5 1 5
Now there are 2⁵ = 32 ways to arrange these digits in ascending order, since I could swap any pair of identical digits in the sorted vector without breaking the ordering.
By default, np.ndarray.sort() uses Quicksort. The qsort variant of this algorithm works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted. Having fewer unique values means that, on average, more values will be equal to the pivot value on any given sweep, and therefore fewer sweeps are needed to fully sort the array.
For example:
%%timeit -n 1 -r 100 x = np.random.random_integers(0, 10, 100000)
x.sort()
# 1 loops, best of 100: 2.3 ms per loop
%%timeit -n 1 -r 100 x = np.random.random_integers(0, 1000, 100000)
x.sort()
# 1 loops, best of 100: 4.62 ms per loop
In this example the dtypes of the two arrays are the same. If your smaller array has a smaller item size compared with the larger array then the cost of copying it due to the fancy indexing will also be smaller.
EDIT: In case anyone new to programming and numpy comes across this post, I want to point out the importance of considering the np.dtype that you are using. In my case, I was actually able to get away with using half-precision floating point, i.e. np.float16, which reduced a 20GB object in memory to 5GB and made sorting much more manageable. The default used by numpy is np.float64, which is a lot of precision that you may not need. Check out the doc here, which describes the capacity of the different data types. Thanks to #ali_m for pointing this out in the comments.
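For concreteness, the memory arithmetic behind that reduction (bytes = number of elements × itemsize):

import numpy as np

n_elements = 868940742 * 3
print(n_elements * np.dtype(np.float64).itemsize / 1e9)  # ~20.9 GB
print(n_elements * np.dtype(np.float16).itemsize / 1e9)  # ~5.2 GB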
I did a bad job explaining this question but I have discovered some helpful workarounds that I think would be useful to share for anyone who needs to sort a truly massive numpy array.
I am building a very large numpy array from 22 "sub-arrays" of human genome data containing the elements [position, value]. Ultimately, the final array must be numerically sorted "in place" based on the values in a particular column and without shuffling the values within rows.
The sub-array dimensions follow the form:
arr1.shape = (N1, 2)
...
arr22.shape = (N22, 2)
sum([N1..N22]) = 868940742, i.e. there are close to 1BN positions to sort.
First I process the 22 sub-arrays with the function process_sub_arrs, which returns a 3-tuple of 1D arrays the same length as the input. I stack the 1D arrays into a new (N, 3) array and insert them into an np.zeros array initialized for the full dataset:
full_arr = np.zeros([868940742, 3])
i, j = 0, 0
for arr in list(arr1..arr22):
    # indices (i, j) incremented at each loop based on sub-array size
    j += len(arr)
    full_arr[i:j, :] = np.column_stack( process_sub_arrs(arr) )
    i = j
EDIT: Since I realized my dataset could be represented with half-precision floats, I now initialize full_arr as follows: full_arr = np.zeros([868940742, 3], dtype=np.float16), which is only 1/4 the size and much easier to sort.
Result is a massive 20GB array:
full_arr.nbytes = 20854577808
As #ali_m pointed out in his detailed post, my earlier routine was inefficient:
sort_idx = np.argsort(full_arr[:,idx])
full_arr = full_arr[sort_idx]
the array sort_idx, which is 33% the size of full_arr, hangs around and wastes memory after sorting full_arr. This sort supposedly generates a copy of full_arr due to "fancy" indexing, potentially pushing memory use to 233% of what is already used to hold the massive array! This is the slow step, lasting about ten minutes and relying heavily on virtual memory.
I'm not sure the "fancy" sort makes a persistent copy however. Watching the memory usage on my machine, it seems that full_arr = full_arr[sort_idx] deletes the reference to the unsorted original, because after about 1 second all that is left is the memory used by the sorted array and the index, even if there is a transient copy.
A more compact usage of argsort() to save memory is this one:
full_arr = full_arr[full_arr[:,idx].argsort()]
This still causes a spike at the time of the assignment, where both a transient index array and a transient copy are made, but the memory is almost instantly freed again.
#ali_m pointed out a nice trick (credited to Joe Kington) for generating a de facto structured array with a view on full_arr. The benefit is that these may be sorted "in place", maintaining stable row order:
full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0)
Views work great for performing mathematical array operations, but for sorting it is far too inefficient for even a single sub-array from my dataset. In general, structured arrays just don't seem to scale very well even though they have really useful properties. If anyone has any idea why this is I would be interested to know.
One good option to minimize memory consumption and improve performance with very large arrays is to build a pipeline of small, simple functions. Functions clear local variables once they have completed so if intermediate data structures are building up and sapping memory this can be a good solution.
This is a sketch of the pipeline I've used to speed up the massive array sort:
def process_sub_arrs(arr):
    """process a sub-array and return a 3-tuple of 1D values arrays"""
    return values1, values2, values3

def build_arr():
    """build the initial array by joining processed sub-arrays"""
    full_arr = np.zeros([868940742, 3])
    i, j = 0, 0
    for arr in list(arr1..arr22):
        # indices (i, j) incremented at each loop based on sub-array size
        j += len(arr)
        full_arr[i:j, :] = np.column_stack( process_sub_arrs(arr) )
        i = j
    return full_arr

def sort_arr():
    """build full_arr, then return it sorted on the chosen column"""
    full_arr = build_arr()
    sort_idx = np.argsort(full_arr[:, index])
    return full_arr[sort_idx]

def get_sorted_arr():
    """call through nested functions to return the sorted array"""
    sorted_arr = sort_arr()
    <process sorted_arr>
    return statistics
call stack: get_sorted_arr --> sort_arr --> build_arr --> process_sub_arrs
Once each inner function has completed, get_sorted_arr() finally just holds the sorted array and then returns the small array of statistics.
EDIT: It is also worth pointing out here that even if you are able to use a more compact dtype to represent your huge array, you will want to use higher precision for summary calculations. For example, since full_arr.dtype = np.float16, the command np.mean(full_arr[:,idx]) tries to calculate the mean in half-precision floating point, but this quickly overflows when summing over a massive array. Using np.mean(full_arr[:,idx], dtype=np.float64) will prevent the overflow.
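A quick way to see the hazard (a sketch using np.sum, where the accumulator really is float16; np.mean on recent NumPy versions may use float32 intermediates for float16 input, but the underlying issue is the same):

import numpy as np

x = np.ones(100000, dtype=np.float16)
print(x.sum())                  # inf: the float16 accumulator overflows past 65504
print(x.sum(dtype=np.float64))  # 100000.0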
I posted this question initially because I was puzzled by the fact that a dataset of identical size suddenly began choking up my system memory, although there was a big difference in the proportion of unique values in the new "slow" set. #ali_m pointed out that, indeed, more uniform data with fewer unique values is easier to sort:
The qsort variant of Quicksort works by recursively selecting a
'pivot' element in the array, then reordering the array such that all
the elements less than the pivot value are placed before it, and all
of the elements greater than the pivot value are placed after it.
Values that are equal to the pivot are already sorted, so intuitively,
the fewer unique values there are in the array, the smaller the number
of swaps there are that need to be made.
On that note, the final change I ended up making to attempt to resolve this issue was to round the newer dataset in advance, since there was an unnecessarily high level of decimal precision leftover from an interpolation step. This ultimately had an even bigger effect than the other memory saving steps, showing that the sort algorithm itself was the limiting factor in this case.
Look forward to other comments or suggestions anyone might have on this topic, and I almost certainly misspoke about some technical issues so I would be glad to hear back :-)
I have two boolean sparse square matrices of c. 80,000 x 80,000 generated from 12MB of data (and am likely to have orders-of-magnitude larger matrices when I use GBs of data).
I want to multiply them (which would produce a triangular matrix; however, I don't get this, since I don't limit the dot product to yield a triangular matrix).
I am wondering what the best way of multiplying them is (memory-wise and speed-wise): I am going to do the computation on an m2.4xlarge AWS instance which has >60GB of RAM. I would prefer to keep the calculation in RAM for speed reasons.
I appreciate that SciPy has sparse matrices and so does h5py, but have no experience in either.
What's the best option to go for?
Thanks in advance
UPDATE: sparsity of the boolean matrices is <0.6%
If your matrices are relatively empty it might be worthwhile encoding them as a data structure of the non-False values. Say a list of tuples describing the location of the non-False values. Or a dictionary with the tuples as the keys.
If you use e.g. a list of tuples you could use a list comprehension to find the items in the second list that can be multiplied with an element from the first list.
a = [(0,0), (3,7), (5,2)] # et cetera
b = ... # idem
result = []
for r, c in a:
    # an element (r, c) of a multiplies an element (j, k) of b only when c == j,
    # and the product contributes to position (r, k) of the result
    result.extend((r, k) for j, k in b if j == c)
-- EDITED TO SATISFY BELOW COMMENT / DOWNVOTER --
You're asking how to multiply matrices fast and easy.
SOLUTION 1: This is a solved problem: use numpy. All these operations are easy in numpy, and since they are implemented in C, are rather blazingly fast.
http://www.numpy.org/
http://www.scipy.org
also see:
Very large matrices using Python and NumPy
http://docs.scipy.org/doc/scipy/reference/sparse.html
SciPy and Numpy have sparse matrices and matrix multiplication. It doesn't use much memory since (at least if I wrote it in C) it probably uses linked lists, and thus will only use the memory required for the sum of the datapoints, plus some overhead. And, it will almost certainly be blazingly fast compared to pure python solution.
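A minimal sketch (scaled down to 8,000 x 8,000 so it runs comfortably; the ~0.6% density matches the update in the question):

import numpy as np
import scipy.sparse as sp

def random_bool_csr(n, density, rng):
    """Build a random boolean CSR matrix; duplicate positions merge to True."""
    nnz = int(n * n * density)
    i = rng.integers(0, n, nnz)
    j = rng.integers(0, n, nnz)
    return sp.csr_matrix((np.ones(nnz, dtype=bool), (i, j)), shape=(n, n))

rng = np.random.default_rng(0)
A = random_bool_csr(8000, 0.006, rng)
B = random_bool_csr(8000, 0.006, rng)
C = A @ B   # sparse-sparse product; the result stays sparse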
SOLUTION 2
Another answer here suggests storing values as tuples of (x, y), presuming the value is False unless it exists, in which case it's True. An alternative to this is a numeric matrix with (x, y, value) tuples.
REGARDLESS: Multiplying these would be Nasty time-wise: find element one, decide which other array element to multiply by, then search the entire dataset for that specific tuple, and if it exists, multiply and insert the result into the result matrix.
SOLUTION 3 ( PREFERRED vs. Solution 2, IMHO )
I would prefer this because it's simpler / faster.
Represent your sparse matrix with a set of dictionaries. Matrix one is a dict with the element at (x, y) and value v being (with x1,y1, x2,y2, etc.):
matrixDictOne = { 'x1:y1' : v1, 'x2:y2': v2, ... }
matrixDictTwo = { 'x1:y1' : v1, 'x2:y2': v2, ... }
Since a Python dict lookup is O(1) on average, it's fast. This does not require searching the entire second matrix's data for element presence before multiplication. So, it's fast. It's easy to write the multiply and easy to understand the representations.
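A rough sketch of Solution 3, using (row, col) tuple keys rather than 'x:y' strings (purely a convenience; with booleans we only need to store the True entries):

def bool_sparse_matmul(A, B):
    """A, B: dicts mapping (row, col) -> True. Returns the dict for A @ B."""
    # group B's column indices by row, so each lookup in the multiply is one dict access
    B_rows = {}
    for (j, k) in B:
        B_rows.setdefault(j, []).append(k)
    C = {}
    for (i, j) in A:
        for k in B_rows.get(j, ()):
            C[(i, k)] = True
    return C

A = {(0, 0): True, (3, 7): True}
B = {(0, 2): True, (7, 1): True}
print(bool_sparse_matmul(A, B))  # {(0, 2): True, (3, 1): True}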
SOLUTION 4 (if you are a glutton for punishment)
Code this solution by using a memory-mapped file of the required size. Initialize a file with null values of the required size. Compute the offsets yourself and write to the appropriate locations in the file as you do the multiplication. Linux has a VMM which will page in and out for you with little overhead or work on your part. This is a solution for very, very large matrices that are NOT SPARSE and thus won't fit in memory.
Note this solves the complaint of the below complainer that it won't fit in memory. However, the OP did say sparse, which implies very few actual datapoints spread out in giant arrays, and Numpy / SciPy handle this natively and thus nicely (lots of people at Fermilab use Numpy / SciPy regularly, I'm confident the sparse matrix code is well tested).