Memory usage increases when building a large NumPy array - python

In the following example, I initialize a 500x2000x2000 three-dimensional NumPy array named a. At each iteration, a random two-dimensional array r is inserted into array a. This example represents a larger piece of code where the r array would be created from various calculations during each iteration of the for-loop. Consequently, each slice in the z dimension of the a array is calculated at each iteration.
# ex1_basic.py
import numpy as np
import time

def main():
    tic = time.perf_counter()

    z = 500   # depth
    x = 2000  # rows
    y = 2000  # columns

    a = np.zeros((z, x, y))

    for i in range(z):
        r = np.random.rand(x, y)
        a[i] = r

    toc = time.perf_counter()
    print('elapsed =', round(toc - tic, 2), 'sec')

if __name__ == '__main__':
    main()
This example's memory usage is profiled with the memory-profiler package. The steps to run memory-profiler for this example are:
# Run the memory profiler
$ mprof run ex1_basic.py
# Plot the memory profiler results
$ mprof plot
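As a complement to the mprof commands above, the memory-profiler package also provides a line-by-line @profile decorator that reports per-line memory increments. A minimal sketch of that usage (my addition, not part of the original example; the script name in the comment is just a placeholder):
from memory_profiler import profile
import numpy as np

@profile
def main():
    # Reports memory use line by line when run with:
    #   python -m memory_profiler ex1_profile_sketch.py
    z, x, y = 500, 2000, 2000
    a = np.zeros((z, x, y))
    for i in range(z):
        a[i] = np.random.rand(x, y)

if __name__ == '__main__':
    main()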
The memory usage is plotted below; it increases over time as values are added to the array.
I profiled another example where the array's data type is defined as np.float32. See below for the example code and memory usage plot. This decreases the overall memory use, but the memory still grows with each iteration.
import numpy as np
import time

def main():
    rng = np.random.default_rng()
    tic = time.perf_counter()

    z = 500   # depth
    x = 2000  # rows
    y = 2000  # columns

    a = np.zeros((z, x, y), dtype=np.float32)

    for i in range(z):
        r = rng.standard_normal((x, y), dtype=np.float32)
        a[i] = r

    toc = time.perf_counter()
    print('elapsed =', round(toc - tic, 2), 'sec')

if __name__ == '__main__':
    main()
Since I initialized array a using np.zeros, I would expect the memory usage to stay constant, given that the block of memory for the array is allocated up front. But it appears that memory usage increases as values are inserted into array a.
So I have two questions related to these examples:
Why does the memory usage increase with time?
How do I create and store array a on disk and only have slice a[i] and the r array in memory at each iteration? Basically, how would I run these examples if the a array did not fit in memory (RAM)?
Update
I ran an example using numpy.memmap but there is no improvement in memory usage. It seems like memmap is still keeping the entire array in memory.
import numpy as np
import time

def main():
    rng = np.random.default_rng()
    tic = time.perf_counter()

    z = 500
    x = 2000
    y = 2000

    a = np.memmap('file.dat', dtype=np.float32, mode='w+', shape=(z, x, y))

    for i in range(z):
        r = rng.standard_normal((x, y), dtype=np.float32)
        a[i] = r

    toc = time.perf_counter()
    print('elapsed =', round(toc - tic, 2), 'sec')

if __name__ == '__main__':
    main()
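For what it's worth, numpy.memmap also has a flush() method; below is a sketch of a per-iteration flush (my variation, not a verified fix). Note that the growth reported by memory-profiler for a memmap is largely resident pages of the mapped file, which the OS can write back and reclaim under memory pressure.
import numpy as np

def main():
    rng = np.random.default_rng()
    z, x, y = 500, 2000, 2000
    a = np.memmap('file.dat', dtype=np.float32, mode='w+', shape=(z, x, y))
    for i in range(z):
        a[i] = rng.standard_normal((x, y), dtype=np.float32)
        a.flush()  # write dirty pages of the mapping back to file.dat

if __name__ == '__main__':
    main()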

Using the h5py package, I can create an HDF5 file that contains a dataset representing the a array. The dset variable is similar to the a variable discussed in the question. This allows the array to reside on disk, not in memory. The generated HDF5 file is 8 GB on disk, which is the size of the array of np.float32 values. The elapsed time for this approach is similar to the examples discussed in the question, so writing to the HDF5 file seems to have a negligible performance impact.
import numpy as np
import h5py
import time

def main():
    rng = np.random.default_rng()
    tic = time.perf_counter()

    z = 500   # depth
    x = 2000  # rows
    y = 2000  # columns

    f = h5py.File('file.hdf5', 'w')
    dset = f.create_dataset('data', shape=(z, x, y), dtype=np.float32)

    for i in range(z):
        r = rng.standard_normal((x, y), dtype=np.float32)
        dset[i, :, :] = r

    toc = time.perf_counter()
    print('elapsed time =', round(toc - tic, 2), 'sec')

    s = np.float32().nbytes * (z * x * y) / 1e9  # where 1 GB = 1000 MB
    print('calculated storage =', s, 'GB')

if __name__ == '__main__':
    main()
Output from running this example on a MacBook Pro with 2.6 GHz 6-Core Intel Core i7 and 32 GB of RAM:
elapsed time = 22.97 sec
calculated storage = 8.0 GB
Running the memory profiler for this example gives the plot shown below. The peak memory usage is about 100 MiB, which is drastically lower than in the examples from the question.
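To process the data later, the dataset can be opened and indexed one slice at a time, so only the requested slice is read into memory. A short sketch using standard h5py indexing (the mean is just a stand-in for real processing):
import h5py
import numpy as np

with h5py.File('file.hdf5', 'r') as f:
    dset = f['data']                  # no data is read yet
    for i in range(dset.shape[0]):
        sl = dset[i]                  # one (2000, 2000) float32 slice, about 16 MB
        m = np.mean(sl)               # placeholder for per-slice processing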

Related

Multiprocessing.Pool runs slow

I'm trying to take advantage of multiprocessing in Python, so I did some tests and found that the multiprocessing code runs much slower than the plain version. What am I doing wrong?
Here is the test script:
import numpy as np
from datetime import datetime
from multiprocessing import Pool

def some_func(argv):
    x = argv[0]
    y = argv[1]
    return np.sum(x * y)

def other_func(argv):
    x = argv[0]
    y = argv[1]
    f1 = np.fft.rfft(x)
    f2 = np.fft.rfft(y)
    CC = np.fft.irfft(f1 * np.conj(f2))
    return CC

N = 20000
X = np.random.randint(0, 10, size=(N, N))
Y = np.random.randint(0, 10, size=(N, N))

output_check = np.zeros(N)
D1 = datetime.now()
for k in range(len(X)):
    output_check[k] = np.max(some_func((X[k], Y[k])))
print('Plain: ', datetime.now() - D1)

output = np.zeros(N)
D1 = datetime.now()
with Pool(10) as pool:  # CPUs
    for ind, res in enumerate(pool.imap(some_func, zip(X, Y), chunksize=1)):
        output[ind] = np.max(res)
    pool.close()
    pool.join()
print('Pool: ', datetime.now() - D1)
Output:
Plain: 0:00:00.904062
Pool: 0:00:15.386251
Why is the difference so big? What is consuming the time?
I have 80 CPUs available and have tried different pool sizes and chunksizes...
The actual function is more complex (like other_func); with it I get almost the same time for the plain and parallel code, but still no speed-up :(
The input is a BIG 3D numpy array, and I need a pairwise convolution of its elements.
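As an aside (not from the original post): with chunksize=1, each task pickles a full (X[k], Y[k]) pair of 20000-element rows to a worker just to do one np.sum, so inter-process communication usually dominates. A common way to cut that overhead is to hand the big arrays to each worker once through a Pool initializer and send only row indices per task. A sketch with made-up names (_init, row_func):
import numpy as np
from multiprocessing import Pool

_X = None
_Y = None

def _init(X, Y):
    # Runs once per worker process; keeps the big arrays out of every task message
    global _X, _Y
    _X, _Y = X, Y

def row_func(k):
    return np.sum(_X[k] * _Y[k])

if __name__ == '__main__':
    N = 2000  # smaller than the question's 20000, just to keep the sketch quick
    X = np.random.randint(0, 10, size=(N, N))
    Y = np.random.randint(0, 10, size=(N, N))
    with Pool(10, initializer=_init, initargs=(X, Y)) as pool:
        output = np.array(pool.map(row_func, range(N), chunksize=100))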

Fast way to generate large-scale random ndarray

I want to generate a random matrix of shape (1e7, 800). But I find numpy.random.rand() becomes very slow at this scale. Is there a quicker way?
A simple way to do that is to write a multi-threaded implementation using Numba:
import numba as nb
import numpy as np

@nb.njit('float64[:,:](int_, int_)', parallel=True)
def genRandom(n, m):
    res = np.empty((n, m))
    # Parallel loop
    for i in nb.prange(n):
        for j in range(m):
            res[i, j] = np.random.rand()
    return res
This is 6.4 times faster than np.random.rand() on my 6-core machine.
Note that using 32-bit floats may speed up the computation a bit, although the precision will be lower.
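For reference, a float32 variant of the same idea might look like the sketch below (my adaptation, not from the answer; the signature string and the explicit np.float32 cast are assumptions):
import numba as nb
import numpy as np

@nb.njit('float32[:,:](int_, int_)', parallel=True)
def genRandom32(n, m):
    res = np.empty((n, m), dtype=np.float32)
    # Parallel loop over rows
    for i in nb.prange(n):
        for j in range(m):
            res[i, j] = np.float32(np.random.rand())
    return res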
Numba is a good option; another option that might work well is dask.array, which creates lazy blocks of numpy arrays and performs parallel computations on the blocks. On my machine I get a factor of 2 improvement in speed (for a 1e6 x 1e3 matrix, since I don't have enough memory on my machine).
rows = 10**6
cols = 10**3
import dask.array as da
x = da.random.random(size=(rows, cols)).compute() # takes about 5 seconds
# import numpy as np
# x = np.random.rand(rows, cols) # takes about 10 seconds
Note that .compute at the end is only there to bring the computed array into memory. In general, you can keep working with the lazy dask array and continue to exploit its parallel computations to get much faster calculations (which can also scale beyond a single machine); see the docs.
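For instance, a reduction can stay lazy end to end, so the full matrix is never materialized in memory. A sketch using the public dask.array API (the chunk size here is an arbitrary choice):
import dask.array as da

x = da.random.random(size=(10**7, 800), chunks=(10**5, 800))  # lazy; nothing is allocated yet
m = x.mean().compute()  # evaluated block-wise in parallel; only the scalar comes back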
An attempt to compare the answers given so far:
I wrote a script that combines the answers already given (by SultanOrazbayev and Jérôme Richard), defines one function for each of the Numba, Dask, and NumPy approaches, and measures the time spent for a number of different array sizes.
The code
import dask.array as da
import matplotlib.pyplot as plt
import numba as nb
import timeit
import numpy as np

@nb.njit('float64[:,:](int_, int_)', parallel=True)
def nmb(n, m):
    res = np.empty((n, m))
    # Parallel loop
    for i in nb.prange(n):
        for j in range(m):
            res[i, j] = np.random.rand()
    return res

def nmp(n, m):
    return np.random.random((n, m))

def dask(n, m):
    return da.random.random(size=(n, m)).compute()

if __name__ == '__main__':
    data = []
    for i in range(1, 16):
        dmm = 2 ** i

        s_nmb = timeit.default_timer()
        nmb(dmm, dmm)
        e_nmb = timeit.default_timer()

        s_nmp = timeit.default_timer()
        nmp(dmm, dmm)
        e_nmp = timeit.default_timer()

        s_dask = timeit.default_timer()
        dask(dmm, dmm)
        e_dask = timeit.default_timer()

        data.append([
            dmm,
            e_nmb - s_nmb,
            e_nmp - s_nmp,
            e_dask - s_dask
        ])

    data = np.array(data)

    plt.plot(data[:, 0], data[:, 1], "-r", label="Numba")
    plt.plot(data[:, 0], data[:, 2], "-g", label="Numpy")
    plt.plot(data[:, 0], data[:, 3], "-b", label="Dask")
    plt.xlabel("Number of elements on each axis")
    plt.ylabel("Time spent (s)")
    plt.legend()
    plt.show()
The result

optimize this numpy operation

I have inherited some code and there is one particular operation that takes an inordinate amount of time.
The operation is defined as:
cutoff = 0.2
# X has shape (76187, 247, 20)
X_flat = X.reshape((X.shape[0], X.shape[1] * X.shape[2]))
weightfun = lambda x: 1.0 / np.sum(np.dot(X_flat, x) / np.dot(x, x) > 1 - cutoff)
# This is expensive...
N_list = np.array(list(map(weightfun, X_flat)))
This takes hours to compute on my machine. I am wondering if there is a way to optimize this. The code computes normalized Hamming distances between vector sequences.
weightfun performs two dot product operations for every row of X_flat. The worst one is np.dot(X_flat, x), where the dot product is performed against the whole X_flat matrix. But there's a trick to speed things up. The iterative part in the first dot product can be computed only once with:
X_matmut = X_flat @ X_flat.T
Also, I noticed that the second dot product is nothing more than the diagonal of the result of the first one.
The rewritten code looks like this:
cutoff = 0.2
# X has shape (76187, 247, 20)
X_flat = X.reshape((X.shape[0], X.shape[1] * X.shape[2]))
X1 = X_flat @ X_flat.T
X2 = X1.diagonal()
N_list = 1.0 / (X1/X2 > 1 - cutoff).sum(axis=0)
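A quick sanity check (my addition, not part of the original answer) that the vectorized form matches the original weightfun on a small random input; the shapes here are arbitrary:
import numpy as np

rng = np.random.default_rng(0)
cutoff = 0.2
X_small = rng.random((50, 30))

# Original row-by-row version
weightfun = lambda x: 1.0 / np.sum(np.dot(X_small, x) / np.dot(x, x) > 1 - cutoff)
ref = np.array(list(map(weightfun, X_small)))

# Vectorized version from above
X1 = X_small @ X_small.T
X2 = X1.diagonal()
fast = 1.0 / (X1 / X2 > 1 - cutoff).sum(axis=0)

assert np.allclose(ref, fast)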
Edit
For such a large input, memory becomes the new bottleneck when performing the operation above, since the resulting matrix won't fit into RAM. So there's also the option of breaking the computation into chunks, as the code below shows.
The code gets a little messy, but at least it didn't try to destroy my PC :-P
import numpy as np
import time

# Sample data
X = np.random.random([76187, 247, 20])

start = time.time()

cutoff = 0.2
X_flat = X.reshape((X.shape[0], X.shape[1] * X.shape[2]))

# Divide data into 20 chunks
X_parts = np.array_split(X_flat, 20)

# Diagonal will be saved incrementally
diagonal = []

for i in range(len(X_parts)):
    part = X_parts[i]
    X_parts[i] = part @ X_flat.T
    diagonal.extend(X_parts[i][range(len(X_parts[i])), range(len(diagonal), len(diagonal) + len(X_parts[i]))])

# Performs the second part of the calculation
diagonal = np.array(diagonal)
X_list = np.zeros(len(diagonal))

for x in X_parts:
    X_list += (x / diagonal > 1 - cutoff).sum(axis=0)

X_list = 1.0 / X_list

print('Time to solve: %.2f secs' % (time.time() - start))
I would love to perform all the computation in a single loop and discard the used chunks, but it is necessary to run over the whole matrix once to retrieve the diagonal. I don't believe it's worth computing everything twice just to save memory.
While I use a decent setup (16 GB of RAM, an Intel i7, and an SSD for storage), the whole processing took me around 15 minutes.
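As an aside (not part of the original answer), the diagonal entries are just the squared row norms of X_flat, so they can also be computed directly up front with einsum, without extracting them from the chunked products. A small self-contained sketch:
import numpy as np

X_flat = np.random.random((1000, 4940))  # small stand-in for the real X_flat

# Squared norm of each row, i.e. the diagonal of X_flat @ X_flat.T,
# computed without forming the full matrix
diagonal = np.einsum('ij,ij->i', X_flat, X_flat)

assert np.allclose(diagonal, (X_flat @ X_flat.T).diagonal())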

Understanding shared memory use for improvement in Numba

I'm trying to learn more about using shared memory to improve performance in some CUDA kernels in Numba. For this, I was looking at the matrix multiplication example in the Numba documentation and tried to implement it to see the gain.
This is my test implementation. I'm aware that the example in the documentation has some issues, which I followed from here, so I copied the fixed example code.
from timeit import default_timer as timer
import numba
from numba import cuda, jit, int32, int64, float64, float32
import numpy as np
from numpy import *

@cuda.jit
def matmul(A, B, C):
    """Perform square matrix multiplication of C = A * B
    """
    i, j = cuda.grid(2)
    if i < C.shape[0] and j < C.shape[1]:
        tmp = 0.
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        C[i, j] = tmp

# Controls threads per block and shared memory usage.
# The computation will be done on blocks of TPBxTPB elements.
TPB = 16

@cuda.jit
def fast_matmul(A, B, C):
    # Define an array in the shared memory
    # The size and type of the arrays must be known at compile time
    sA = cuda.shared.array(shape=(TPB, TPB), dtype=float32)
    sB = cuda.shared.array(shape=(TPB, TPB), dtype=float32)

    x, y = cuda.grid(2)

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bpg = cuda.gridDim.x    # blocks per grid

    # Each thread computes one element in the result matrix.
    # The dot product is chunked into dot products of TPB-long vectors.
    tmp = 0.
    for i in range(bpg):
        # Preload data into shared memory
        sA[ty, tx] = 0
        sB[ty, tx] = 0
        if y < A.shape[0] and (tx + i * TPB) < A.shape[1]:
            sA[ty, tx] = A[y, tx + i * TPB]
        if x < B.shape[1] and (ty + i * TPB) < B.shape[0]:
            sB[ty, tx] = B[ty + i * TPB, x]

        # Wait until all threads finish preloading
        cuda.syncthreads()

        # Computes partial product on the shared memory
        for j in range(TPB):
            tmp += sA[ty, j] * sB[j, tx]

        # Wait until all threads finish computing
        cuda.syncthreads()

    if y < C.shape[0] and x < C.shape[1]:
        C[y, x] = tmp

size = 1024 * 4
tpbx, tpby = 16, 16
tpb = (tpbx, tpby)
bpgx, bpgy = int(size / tpbx), int(size / tpby)
bpg = (bpgx, bpgy)

a_in = cuda.to_device(np.arange(size * size, dtype=np.float32).reshape((size, size)))
b_in = cuda.to_device(np.ones(size * size, dtype=np.float32).reshape((size, size)))
c_out1 = cuda.device_array_like(a_in)
c_out2 = cuda.device_array_like(a_in)

s = timer()
cuda.synchronize()
matmul[bpg, tpb](a_in, b_in, c_out1)
cuda.synchronize()
gpu_time = timer() - s
print(gpu_time)
c_host1 = c_out1.copy_to_host()
print(c_host1)

s = timer()
cuda.synchronize()
fast_matmul[bpg, tpb](a_in, b_in, c_out2)
cuda.synchronize()
gpu_time = timer() - s
print(gpu_time)
c_host2 = c_out2.copy_to_host()
print(c_host2)
The execution times of the above kernels are essentially the same; actually, matmul was faster for some larger input matrices. I would like to know what I'm missing in order to see the gain the documentation suggests.
Thanks,
Bruno.
I made a performance mistake in the code I put in that other answer. I've now fixed it. In a nutshell this line:
tmp = 0.
caused numba to create a 64-bit floating point variable tmp. That triggered other arithmetic in the kernel to be promoted from 32-bit floating point to 64-bit floating point. That is inconsistent with the rest of the arithmetic and also inconsistent with the intent of the demonstration in the other answer. This error affects both kernels.
When I change it in both kernels to
tmp = float32(0.)
both kernels get noticeably faster, and on my GTX960 GPU, your test case shows that the shared code runs about 2x faster than the non-shared code (but see below).
The non-shared kernel also has a performance issue related to memory access patterns. Similar to the indices swap in that other answer, for this particular scenario only, we can rectify this problem simply by reversing the assigned indices:
j, i = cuda.grid(2)
in the non-shared kernel. This allows that kernel to perform approximately as well as it can, and with that change the shared kernel runs about 2x faster than the non-shared kernel. Without that additional change, the non-shared kernel's performance is much worse.
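Putting the two changes from this answer together, the non-shared kernel would look roughly like the sketch below (my assembly of the snippets above, not code from the answer):
from numba import cuda, float32

@cuda.jit
def matmul_fixed(A, B, C):
    j, i = cuda.grid(2)        # reversed indices, per the note above
    if i < C.shape[0] and j < C.shape[1]:
        tmp = float32(0.)      # keep the accumulator in 32-bit float
        for k in range(A.shape[1]):
            tmp += A[i, k] * B[k, j]
        C[i, j] = tmp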

stack images as numpy array faster (than preallocation)?

I often need to stack 2D numpy arrays (TIFF images). For that, I first append them to a list and use np.dstack. This seems to be the fastest way to get a 3D array of stacked images. But is there a faster or more memory-efficient way?
from time import time
import numpy as np

# Create 100 images of the same dimension 256x512 (8-bit).
# In reality, each image comes from a different file
img = np.random.randint(0, 255, (256, 512, 100))

t0 = time()
temp = []
for n in range(100):
    temp.append(img[:, :, n])
stacked = np.dstack(temp)
# stacked = np.array(temp)  # much slower: 3.5 s for 100
print(time() - t0)  # 0.58 s for 100 frames
print(stacked.shape)

# dstack in each loop is slower
t0 = time()
temp = img[:, :, 0]
for n in range(1, 100):
    temp = np.dstack((temp, img[:, :, n]))
print(time() - t0)  # 3.13 s for 100 frames
print(temp.shape)

# counter-intuitive but preallocation is slightly slower
stacked = np.empty((256, 512, 100))
t0 = time()
for n in range(100):
    stacked[:, :, n] = img[:, :, n]
print(time() - t0)  # 0.651 s for 100 frames
print(stacked.shape)

# (Edit) As in the accepted answer, re-arranging axes to mainly use
# the first axis to access data improved the speed significantly.
img = np.random.randint(0, 255, (100, 256, 512))
stacked = np.empty((100, 256, 512))
t0 = time()
for n in range(100):
    stacked[n, :, :] = img[n, :, :]
print(time() - t0)  # 0.08 s for 100 frames
print(stacked.shape)
After some joint effort with otterb, we concluded that preallocating the array is the way to go. Apparently the performance-killing bottleneck was the array layout, with the image number (n) being the fastest-changing index. If we make n the first index of the array (which defaults to "C" ordering: the first index changes slowest, the last index changes fastest), we get the best performance:
from time import time
import numpy as np

# Create 100 images of the same dimension 256x512 (8-bit).
# In reality, each image comes from a different file
img = np.random.randint(0, 255, (100, 256, 512))

# Preallocate with the image number as the first (slowest-changing) index
stacked = np.empty((100, 256, 512))

t0 = time()
for n in range(100):
    stacked[n] = img[n]
print(time() - t0)
print(stacked.shape)
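The same layout also falls out of np.stack when collecting frames in a list, since stacking along a new first axis gives the C-ordered (n, rows, cols) shape discussed above. A small sketch (my addition; the random frames stand in for files):
import numpy as np

frames = [np.random.randint(0, 255, (256, 512), dtype=np.uint8) for _ in range(100)]
stacked = np.stack(frames, axis=0)   # shape (100, 256, 512)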
