I'm curious if there is a better way of doing a numpy ravel+reshape.
I load up a large stack of large images and get an array of shape (num-rasters, h, w), where num-rasters is the number of images and h/w are the height/width of an image (all images are the same size).
I wish to convert the array into a shape (h*w, num-rasters)
here is the way I do it now:
res = my_function(some_variable) #(num-rasters, h, w)
res = res.ravel(order='F').reshape((res.shape[1] * res.shape[2], res.shape[0])) #(h*w, num-rasters)
It works fine, but my 'res' variable (the stack of images) is several GB in size, and even with a ton of RAM (32 GB) the operation takes it all.
I'm curious if any pythonistas or numpy pros have any suggestions.
thanks!
############### post question edit/follow-up
First, the reshaping in place ended up being way faster than a .reshape() call, which would presumably return a copy with all the associated memory overhead. I should have known better with that.
Shortly after I posted I discovered swapaxes (http://docs.scipy.org/doc/numpy/reference/generated/numpy.swapaxes.html), so I made a version with that too:
res2 = res.swapaxes(0, 2).reshape((res.shape[1] * res.shape[2], res.shape[0]))
took 9.2 seconds
It was only a wee bit faster than my original (9.3 seconds), but with only one discernible memory peak in my process... still a big and slow peak.
as magic suggested:
res.shape = (res.shape[0], res.shape[1]*res.shape[2])
res_T = res.T
took basically no time (2.4e-5 seconds) with no memory spike.
And throwing in a copy:
res.shape = (res.shape[0], res.shape[1]*res.shape[2])
res_T = res.T.copy()
makes the operation take 0.85 seconds with a similar (but brief) memory spike (for the copy).
The take-home for me is that swapaxes does the same thing as a transpose, but you can swap any axes you want, whereas transpose has its fixed way of flipping. It's also nice to see how a transpose behaves in 3-D; that is the main point for me: not needing to ravel. Also, the transpose is a view, so nothing is copied.
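A quick check of that take-home point (my own snippet, on a tiny made-up array, not from the post): swapaxes(0, 2) on a 3-D array coincides with .T/transpose(2, 1, 0), and both return views of the same memory.

import numpy as np

res = np.arange(24, dtype=np.float32).reshape(4, 3, 2)   # stand-in for a (num_rasters, h, w) stack
print(np.array_equal(res.swapaxes(0, 2), res.transpose(2, 1, 0)))   # True
print(np.shares_memory(res, res.swapaxes(0, 2)))                    # True -> a view, no copy
print(np.shares_memory(res, res.T))                                 # True -> also a view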
You can change the array's shape attribute, which changes the shape in place. It's a bit tricky to tell which dimensions go where, but something along these lines should work:
res.shape = (res.shape[0], res.shape[1]*res.shape[2]) ## converts to num_rasters, h*w
Transposing this gives you a view (so it is sort of in place as well), so then you can do
res_T = res.T
and this should lead to no memory copying, to my knowledge.
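For what it's worth, a minimal end-to-end sketch (mine, with made-up sizes) of this shape-assignment plus transpose approach; writing through the transposed view changes the original, which confirms no copy was made:

import numpy as np

num_rasters, h, w = 5, 4, 3
res = np.arange(num_rasters * h * w, dtype=np.float32).reshape(num_rasters, h, w)

res.shape = (num_rasters, h * w)   # in-place shape change, no data is moved
res_T = res.T                      # (h*w, num_rasters) view

print(res_T.shape)        # (12, 5)
res_T[0, 0] = -1.0
print(res[0, 0])          # -1.0 -> res_T is a view onto res, nothing was copied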
Related
My goal is to compute the derivative of a moving window of a multidimensional dataset along a given dimension, where the dataset is stored as an Xarray DataArray or Dataset.
In the simplest case, given a 2D array I would like to compute a moving difference across multiple entries in one dimension, e.g.:
data = np.kron(np.linspace(0,1,10), np.linspace(1,4,6) ).reshape(10,6)
T=3
reducedArray = np.zeros_like(data)
for i in range(data.shape[1]):
    if i < T:
        reducedArray[:,i] = data[:,i] - data[:,0]
    else:
        reducedArray[:,i] = data[:,i] - data[:,i-T]
where the if i < T condition ensures that input and output contain proper values (i.e., no NaNs) and are of identical shape.
Xarray's diff aims to perform a finite-difference approximation of a given derivative order using nearest-neighbours, so it is not suitable here, hence the question:
Is it possible to perform this operation using Xarray functions only?
The rolling weighted average example appears to be something similar, but still too distinct due to the usage of NumPy routines. I've been thinking that something along the lines of the following should work:
xr2DDataArray = xr.DataArray(
    data,
    dims=('x','y'),
    coords={'x':np.linspace(0,1,10), 'y':np.linspace(1,4,6)}
)
r = xr2DDataArray.rolling(x=T,min_periods=2)
r.reduce( redFn )
I am struggling with the definition of redFn here, though.
Caveat: The actual dataset to which the operation is to be applied will have a size of ~10 GiB, so a solution that does not blow up the memory requirements will be highly appreciated!
Update/Solution
Using Xarray rolling
After sleeping on it and a bit more fiddling, the post linked above actually contains a solution. To obtain a finite difference we just have to define the weights to be $\pm 1$ at the ends and $0$ elsewhere:
def fdMovingWindow(data, **kwargs):
    T = kwargs['T']
    del kwargs['T']
    weights = np.zeros(T)
    weights[0] = -1
    weights[-1] = 1
    axis = kwargs['axis']
    if data.shape[axis] == T:
        return np.sum(data * weights, **kwargs)
    else:
        return 0
r.reduce(fdMovingWindow, T=4)
alternatively, using construct and a dot product:
weights = np.zeros(T)
weights[0] = -1
weights[-1] = 1
xrWeights = xr.DataArray(weights, dims=['window'])
xr2DDataArray.rolling(y=T,min_periods=1).construct('window').dot(xrWeights)
This carries a massive caveat: the procedure essentially creates a list of arrays representing the moving window. This is fine for a modest 2D/3D array, but for a 4D array that takes up ~10 GiB in memory it will lead to an OOM death!
Simplistic - memory efficient
A less memory-intensive way is to copy the array and work on it much like a plain NumPy array:
xrDiffArray = xr2DDataArray.copy()
dy = xr2DDataArray.y.values[1] - xr2DDataArray.y.values[0] #equidistant sampling
for src in xr2DDataArray:
    if src.y.values < xr2DDataArray.y.values[0] + T*dy:
        xrDiffArray.loc[dict(y = src.y.values)] = src.values - xr2DDataArray.values[0]
    else:
        xrDiffArray.loc[dict(y = src.y.values)] = src.values - xr2DDataArray.sel(y = src.y.values - dy*T).values
This will produce the intended result without dimensional errors, but it requires a copy of the dataset.
I was hoping to utilise Xarray to prevent a copy and instead just chain operations that are then evaluated if and when values are actually requested.
A suggestion as to how to accomplish this will still be welcomed!
I have never used xarray, so maybe I am mistaken, but I think you can get the result you want while avoiding loops and conditionals. This is at least twice as fast as your example for numpy arrays:
data = np.kron(np.linspace(0,1,10), np.linspace(1,4,6)).reshape(10,6)
reducedArray = np.empty_like(data)
reducedArray[:, T:] = data[:, T:] - data[:, :-T]
reducedArray[:, :T] = data[:, :T] - data[:, 0, np.newaxis]
I imagine the improvement will be higher when using DataArrays.
It does not use xarray functions, but it does not rely on numpy routines beyond basic slicing either. I am confident that translating this to xarray would be straightforward; I know it works if there are no coords, but once you include them you get an error because of the coords mismatch (the coords of data[:, T:] and data[:, :-T] are different). Sadly, I can't do better right now.
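For reference, a minimal sketch (my own addition, not part of the answer above) of one way to sidestep that coords mismatch: do the shifted subtraction on the raw .values and wrap the result back into a DataArray with the original dims and coords.

import numpy as np
import xarray as xr

T = 3
data = np.kron(np.linspace(0, 1, 10), np.linspace(1, 4, 6)).reshape(10, 6)
da = xr.DataArray(data, dims=('x', 'y'),
                  coords={'x': np.linspace(0, 1, 10), 'y': np.linspace(1, 4, 6)})

reduced = np.empty_like(da.values)
reduced[:, T:] = da.values[:, T:] - da.values[:, :-T]            # full-width differences
reduced[:, :T] = da.values[:, :T] - da.values[:, 0, np.newaxis]  # first T columns vs column 0
daReduced = xr.DataArray(reduced, dims=da.dims, coords=da.coords)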
I have run into an Out of Memory problem while running a python script. The trace reads -
490426.070081] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice,task=python3,pid=18456,uid=1003
[490426.070085] Out of memory: Killed process 18456 (python3) total-vm:82439932kB, anon-rss:63127200kB, file-rss:4kB, shmem-rss:0kB
[490427.453131] oom_reaper: reaped process 18456 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
I strongly suspect it is because of the concatenations I do in the script, now that the smaller test-sample script has been applied to a larger dataset of 105,000 entries.
So a bit of overview of how my script looks. I have about 105,000 rows of timestamps and other data.
dataset -
2020-05-24T10:44:37.923792|[0.0, 0.0, -0.246047720313072, 0.0]
2020-05-24T10:44:36.669264|[1.0, 1.0, 0.0, 0.0]
2020-05-24T10:44:37.174584|[1.0, 1.0, 0.0, 0.0]
2020-05-24T10:57:53.345618|[0.0, 0.0, 0.0, 0.0]
Each timestamp has 3 images, so N timestamps give N*3 images (for example, 4 timestamps = 12 images). I would like to concatenate the 3 images for each timestamp into one array along axis=2; the result dimension would be 70x320x9. Then I go through all the rows in that way and get an end tensor of dimension Nx70x320x9.
I solved that with help from here -- Python - Numpy 3D array - concatenate issues -- using a dictionary keyed by timestamp and concatenating later.
collected_images[timepoint].append(image)
.
.
.
output = []
for key, val in collected_images.items():
    temp = np.concatenate(val, axis=2)     # the 3 images of one timestamp -> 70x320x9
    output.append(temp[np.newaxis, ...])   # add a leading axis so the blocks can be stacked
output = np.concatenate(output, axis=0)    # Nx70x320x9
However, as you would've guessed, when applied to 105K timestamps (105K * 3 images) the script crashes with OOM.
This is where I seek your help.
I'm looking for ideas to solve this bottleneck. What other strategy can I use to accomplish my requirement? Is it possible to make some modifications to temporarily work around the kernel's OOM behaviour?
If you know the size of your dataset, you can generate a file-mapped array of a predefined size:
import numpy as np
n = 105000
a = np.memmap('array.dat', dtype='int16', mode='w+', shape=(n, 70, 320, 9))
You can use a as a numpy array, but it is stored on disk rather than in memory.
Change the data type from int16 to whatever is suitable for your data (int8, float32, etc.).
You probably don't want to use slices like a[:, i, :, :] because those will be very slow.
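To connect this to the question above, here is a rough sketch (my assumptions about dtype and about reusing the collected_images dict, not the OP's code) of filling the memmapped array one timestamp at a time, so only one 70x320x9 block is ever built in memory:

import numpy as np

n = 105000   # number of timestamps
out = np.memmap('array.dat', dtype='float32', mode='w+', shape=(n, 70, 320, 9))

for idx, timepoint in enumerate(sorted(collected_images)):          # hypothetical dict: timestamp -> list of 3 images
    out[idx] = np.concatenate(collected_images[timepoint], axis=2)  # one 70x320x9 block written straight to disk
out.flush()   # push any cached pages to the file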
I solved the issue!
It took a while to revise my logic. The key change was to empty the list after every iteration and to figure out how to maintain the desired dimension. With a bit of help, I made changes to eliminate the dictionary and the double concatenation: I just used a list, appended to it and concatenated at each iteration, and emptied the 3-image list for the next iteration. Doing this saved loading everything into memory.
Here is the sample of that code-
collected_images = []   # the 3 images of the current timestamp
images_concat = []      # one concatenated block per timestamp
collected_images.append(image)                            # append each of the 3 images (inside the read loop)
concate_img = np.concatenate(collected_images, axis=2)    # 70x320x9
images_concat.append(concate_img)                         # grows to n blocks of 70x320x9
collected_images = []                                     # empty the 3-image list for the next iteration
I'm trying to perform calculations on very large arrays of dimensions 65536 x 65536. Since I read that np.memmap allows calculations out of core, so that I'm not limited by memory, I tried to do this. The method works for smaller arrays like 8192 x 8192. However, when I try the larger dimension, I get a bus error (core dumped). What could be causing this issue, and how can I overcome it? I would appreciate any advice. The code is below. I have two arrays, X and Y, stored in binary format, which I load and perform calculations on. Additionally, I have 128 GB of RAM, so there should be no issue in allocating the new arrays.
import numpy as np
X = np.memmap('X.bin',dtype='float64',mode='r',shape=(65536,65536))
Y = np.memmap('Y.bin',dtype='float64',mode='r',shape=(65536,65536))
A = np.fft.rfft2(np.fft.fftshift(X))
B = np.fft.rfft2(Y)
C = np.fft.irfft2(A*B)
alpha_x,alpha_y = np.gradient(C,edge_order=2)
Together, this would be
alpha_x,alpha_y = np.gradient(np.fft.irfft2(np.fft.rfft2(Y)*np.fft.rfft2(np.fft.fftshift(X))),edge_order=2)
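Not an answer from the thread, but one classic cause of SIGBUS with np.memmap is a backing file that is smaller than the mapped region (touching pages past the end of the file raises a bus error), so a quick sanity check on the files (my own guess at the cause) might be worth a try:

import os
import numpy as np

expected = 65536 * 65536 * np.dtype('float64').itemsize   # 32 GiB per array
for fname in ('X.bin', 'Y.bin'):
    size = os.path.getsize(fname)
    print(fname, size, 'OK' if size >= expected else f'only {size} of {expected} bytes')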
As part of my data processing I produce huge non-sparse matrices on the order of 100000x100000 cells, which I want to downsample by a factor of 10 to reduce the amount of data. In this case I want to average over blocks of 10x10 pixels, to reduce the size of my matrix from 100000x100000 to 10000x10000.
What is the fastest way to do so using Python? It does not matter to me if I need to save my original data in a new data format, because I have to do the downsampling of the same dataset multiple times.
Currently I am using numpy.memmap:
import numpy as np
data_1 = 'data_1.dat'
data_2 = 'data_2.dat'
lines = 100000
pixels = 100000
window = 10
new_lines = lines // window    # integer division so the values can be used in reshape
new_pixels = pixels // window
dat_1 = np.memmap(data_1, dtype='float32', mode='r', shape=(lines, pixels))
dat_2 = np.memmap(data_2, dtype='float32', mode='r', shape=(lines, pixels))
dat_in = dat_1 * dat_2
dat_out = dat_in.reshape([new_lines, window, new_pixels, window]).mean(3).mean(1)
But with large files this method becomes very slow. Likely this has something to do with the binary data of these files, which are ordered by line. Therefore, I think that a data format which stores my data in blocks instead of lines would be faster, but I am not sure what the performance gain would be and whether there are Python packages that support this.
I have also thought about downsampling of the data before creating such a huge matrix (not shown here), but my input data is fractured and irregular, so that would become very complex.
Based on this answer, I think this might be a relatively fast method, depending on how much overhead reshape gives you with memmap.
def downSample(a, window):
    i, j = a.shape
    ir = np.arange(0, i, window)    # start index of each block of rows
    jr = np.arange(0, j, window)    # start index of each block of columns
    n = 1./(window**2)
    return n * np.add.reduceat(np.add.reduceat(a, ir), jr, axis=1)  # block sums -> block means
Hard to test speed without your dataset.
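A tiny usage check of the function above (my own example, not from the answer): 3x3 block means of a 6x6 array.

a = np.arange(36, dtype='float32').reshape(6, 6)
print(downSample(a, 3))
# [[ 7. 10.]
#  [25. 28.]]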
This avoids an intermediate copy, as the reshape keeps dimensions contiguous
dat_in.reshape((lines // window, window, pixels // window, window)).mean(axis=(1,3))
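A quick sanity check (mine, on a small in-memory array) that the single mean over both block axes matches the chained .mean(3).mean(1) from the question:

a = np.arange(36, dtype='float32').reshape(6, 6)
m1 = a.reshape((2, 3, 2, 3)).mean(axis=(1, 3))   # one call over both window axes
m2 = a.reshape([2, 3, 2, 3]).mean(3).mean(1)     # the question's two-step version
print(np.allclose(m1, m2))   # True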
Is it possible to obtain better performance (both in memory consumption and speed) in this moving-window computation? I have a 1000x1000 numpy array and I take 16x16 windows through the whole array and finally apply some function to each window (in this case, a discrete cosine transform.)
import numpy as np
from scipy.fftpack import dct
from skimage.util import view_as_windows
X = np.arange(1000*1000, dtype=np.float32).reshape(1000,1000)
window_size = 16
windows = view_as_windows(X, (window_size,window_size))
dcts = np.zeros(windows.reshape(-1,window_size, window_size).shape, dtype=np.float32)
for idx, window in enumerate(windows.reshape(-1, window_size, window_size)):
    dcts[idx, :, :] = dct(window)
dcts = dcts.reshape(windows.shape)
This code takes too much memory (in the example above the memory consumption is not that bad: windows uses 1 GB and dcts also needs 1 GB) and takes 25 seconds to complete. I'm a bit unsure what I'm doing wrong, because this should be a straightforward calculation (e.g. filtering an image). Is there a better way to accomplish this?
UPDATE:
I was initially worried that the arrays produced by Kington's solution and my initial approach were very different, but the difference is restricted to the boundaries, so it is unlikely to cause serious issues for most applications. The only remaining problem is that both solutions are very slow. Currently, the first solution takes 1 min 10 s and the second 59 seconds.
UPDATE 2:
I noticed the biggest culprits by far are dct and np.mean. Even generic_filter performs decently (8.6 seconds) using a "cythonized" version of mean from bottleneck:
import bottleneck as bp
def func(window, shape):
    window = window.reshape(shape)
    #return np.abs(dct(dct(window, axis=1), axis=0)).mean()
    return bp.nanmean(dct(window))

result = scipy.ndimage.generic_filter(X, func, (16, 16),
                                      extra_arguments=([16, 16],))
I'm currently reading how to wrap C code using numpy in order to replace scipy.fftpack.dct. If anyone knows how to do it, I would appreciate the help.
Since scipy.fftpack.dct calculates separate transforms along the last axis of the input array, you can replace your loop with:
windows = view_as_windows(X, (window_size,window_size))
dcts = dct(windows)
result1 = dcts.mean(axis=(2,3))
Now only the dcts array requires a lot of memory, and windows remains merely a view into X. And because the DCTs are calculated with a single function call, it's also much faster. However, because the windows overlap there are lots of repeated calculations. This can be overcome by only calculating the DCT for each sub-row once, followed by a windowed mean:
ws = window_size
row_dcts = dct(view_as_windows(X, (1, ws)))
cs = row_dcts.squeeze().sum(axis=-1).cumsum(axis=0)
result2 = np.vstack((cs[ws-1], cs[ws:]-cs[:-ws])) / ws**2
Though it seems what is gained in efficiency is lost in code clarity... Basically, the approach here is to first calculate the DCTs and then take the window average by summing over the 2D window and dividing by the number of elements in the window. The DCTs are already calculated over row-wise moving windows, so we take a regular sum over those. However, we still need a moving-window sum over the columns to arrive at the proper 2D window sums. To do this efficiently we use a cumsum trick, where:
sum(A[p:q]) # q-p == window_size
Is equivalent to:
cs = cumsum(A)
cs[q-1] - cs[p-1]
This avoids having to sum the exact same numbers over and over. Unfortunately it doesn't work for the first window (when p == 0), so for that we have to take only cs[q-1] and stack it together with the other window sums. Finally we divide by the number of elements to arrive at the 2D window average.
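A tiny numeric illustration (my own) of that identity:

A = np.array([1, 2, 3, 4, 5])
cs = np.cumsum(A)             # [ 1  3  6 10 15]
p, q = 1, 4                   # window of size 3
print(A[p:q].sum())           # 9
print(cs[q-1] - cs[p-1])      # 9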
If you want to do a 2D DCT, this second approach becomes less interesting, because you'll eventually need the full 985 x 985 x 16 x 16 array before you can take the mean.
Both approaches above should be equivalent, but it may be a good idea to perform the arithmetic with 64-bit floats:
np.allclose(result1, result2, atol=1e-6)
# False
np.allclose(result1, result2, atol=1e-5)
# True
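One way to do that (a sketch of my own, simply recomputing the first result in float64 at roughly twice the memory cost of the float32 dcts array):

X64 = X.astype(np.float64)
dcts64 = dct(view_as_windows(X64, (window_size, window_size)))
result1_64 = dcts64.mean(axis=(2, 3))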
skimage.util.view_as_windows is using striding tricks to make an array of overlapping "windows" that doesn't use any additional memory.
However, when you make a new array of that shape, it will require roughly 256 times (16 x 16) the memory that your original X array or the windows array used.
Based on your comment, your end result is doing dcts.reshape(windows.shape).mean(axis=2).mean(axis=2) - taking the mean of the dct of each window.
Therefore, it would be more memory-efficient (though similar performance wise) to take the mean inside the loop and not store the huge intermediate array of windows:
import numpy as np
from scipy.fftpack import dct
from skimage.util import view_as_windows
X = np.arange(1000*1000, dtype=np.float32).reshape(1000,1000)
window_size = 16
windows = view_as_windows(X, (window_size, window_size))
dcts = np.zeros(windows.shape[:2], dtype=np.float32).ravel()
for idx, window in enumerate(windows.reshape(-1, window_size, window_size)):
    dcts[idx] = dct(window).mean()
dcts = dcts.reshape(windows.shape[:2])
Another option is scipy.ndimage.generic_filter. It won't increase performance much (the bottleneck is the python function call in the inner loop), but you'll have a lot more boundary condition options, and it will be fairly memory efficient:
import numpy as np
from scipy.fftpack import dct
import scipy.ndimage
X = np.arange(1000*1000, dtype=np.float32).reshape(1000,1000)
def func(window, shape):
    window = window.reshape(shape)
    return dct(window).mean()

result = scipy.ndimage.generic_filter(X, func, (16, 16),
                                      extra_arguments=([16, 16],))
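As an example of those boundary options (my own addition), generic_filter accepts the usual scipy.ndimage mode/cval arguments, e.g. zero padding instead of the default reflection:

result_zero_padded = scipy.ndimage.generic_filter(X, func, (16, 16), mode='constant', cval=0,
                                                  extra_arguments=([16, 16],))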