How to easily slice/index multi-dimensional arrays in Numpy/Python? - python

I have a numpy array of size (15 x 200 x 3) called rap.
I would like to slice it based on a 2d list such as this one:
fragment = [0 93
7 102
6 43
11 167]
This is basically a list of index pairs into the first two dimensions of the original 3D array, whose corresponding entries I want returned.
It gives an error when I try to do it this way:
rap_sliced = rap[fragment, :]
or
rap_sliced = rap[list(fragment), :]
rap_sliced = rap[fragment]
What am I doing wrong?

Assuming input:
>>> fragment
[[0, 93], [7, 102], [6, 43], [11, 167]]
>>> fragment=np.array(fragment)
this will work:
rap[fragment[:, 0], fragment[:, 1], :]
So
numpy_array[X, Y, Z]
where X, Y, Z can each be a single value, a one-dimensional list or array, or :
Alternatively for numpy you can do:
numpy_array[boolean_array]
where numpy_array.shape == boolean_array.shape, and boolean_array holds True/False values indicating whether the element at the corresponding coordinates of numpy_array should be returned
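A minimal runnable sketch of the integer-indexing pattern above, using the shapes from the question but made-up data:
import numpy as np

rap = np.arange(15 * 200 * 3).reshape(15, 200, 3)   # dummy data with the question's shape
fragment = np.array([[0, 93], [7, 102], [6, 43], [11, 167]])

# Integer (fancy) indexing: pair up the first two indices, keep the last axis whole.
rap_sliced = rap[fragment[:, 0], fragment[:, 1], :]
print(rap_sliced.shape)                  # (4, 3)

# The same selection written as a tuple of index arrays:
rap_sliced2 = rap[tuple(fragment.T)]
print(np.array_equal(rap_sliced, rap_sliced2))   # True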

Related

What does it mean when an array is used to index an array in python?

I have a snippet of code as follows:
y = np.array(self.unique_y)
random_indices = self.random_generator.integers(0,len(self.unique_y), size=5)
return y[random_indices]
unique_y is simply a 1D array of [0 1 2].
My understanding of integers is that it produces an array, which is confirmed when I printed y and random_indices.
Here, y is [0 1 2] and random_indices is [2 2 1 0 2].
The return is also [2 2 1 0 2]
My two questions:
Can I just do
return np.array(self.random_generator.integers(0,len(self.unique_y), size=5))
What is happening when random_indices is used to index y in y[random_indices]? Both y and random_indices are arrays, but one is used to index the other.
Make a 1d array:
In [337]: y = np.arange(10)*10
In [338]: y
Out[338]: array([ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90])
and another array with values in the correct index range (0 to 9):
In [339]: idx = np.array([2,5,0])
Simply select the corresponding elements from y:
In [340]: y[idx]
Out[340]: array([20, 50, 0])
That works if idx is a list as well.
Things become more complicated when y has more dimensions, and we use arrays to index all dimensions. For that you need to understand broadcasting. But for 1d arrays, the indexing is straightforward.
The list equivalent is:
In [341]: [y[i] for i in idx]
Out[341]: [20, 50, 0]
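On the first question: Generator.integers already returns a numpy array when size is given, so wrapping it in np.array just makes an extra copy. And here is a minimal sketch (with made-up values) of the broadcasting behaviour mentioned above, where index arrays are used on more than one dimension:
import numpy as np

a = np.arange(12).reshape(3, 4)
rows = np.array([[0], [2]])   # shape (2, 1)
cols = np.array([1, 3])       # shape (2,); broadcasts with rows to (2, 2)
print(a[rows, cols])
# [[ 1  3]
#  [ 9 11]]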

How to delete rows of numpy array by multiple row indices?

I have two lists of indices (idx[0] and idx[1]), and I should delete the corresponding rows from numpy array y_test.
y_test
12 11 10
1 2 2
3 2 3
4 1 2
13 1 10
idx[0] = [0,2]
idx[1] = [1,3]
I tried to delete the rows as follows (using ~). But it didn't work:
result = y_test[(~idx[0]+~idx[1]+~idx[2])]
Expected result:
result =
13 1 10
Instead of removing elements, just make a new array with the desired ones. This will keep any future indexing from getting jumbled up and maintain the old array.
import numpy as np
y_test = np.asarray([[12, 11, 10], [1, 2, 2], [3, 2, 3], [4, 1, 2], [13, 1, 10]])
idx = [[0, 2], [1, 3]]
# flatten list of lists
idx_flat = [i for j in idx for i in j]
# assign values that are NOT in your idx list to a new array
result = [row for num, row in enumerate(y_test) if num not in idx_flat]
# cast this however you want it, right now 'result' is a list of np.arrays
print(result)
[array([13, 1, 10])]
For an understanding of the flatten step, look up nested list comprehensions.
You can use numpy.delete, which deletes subarrays along the given axis. Flattening the nested index list keeps the call unambiguous:
np.delete(y_test, np.ravel(idx), axis=0)
Make sure the indices have an integer dtype, and convert them with numpy.astype if not.
Your approach did not work because idx is not a boolean index array but holds integer indices. So ~, which is bitwise negation, would produce ~np.array([0, 2]) == np.array([-1, -3]) (and it does not work on plain Python lists at all).
I would definitely recommend reading up on the difference between index arrays and boolean index arrays. For boolean index arrays I would suggest using numpy.logical_not and numpy.logical_or.
+ concatenates Python lists but is element-wise addition for numpy arrays.
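A sketch of the boolean-mask route suggested above; np.isin is my choice here, not something from the question:
import numpy as np

y_test = np.array([[12, 11, 10], [1, 2, 2], [3, 2, 3], [4, 1, 2], [13, 1, 10]])
idx = [[0, 2], [1, 3]]

keep = ~np.isin(np.arange(len(y_test)), np.ravel(idx))   # True for rows to keep
print(y_test[keep])
# [[13  1 10]]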
Since you are using NumPy I'd suggest masking in this way.
Setup:
import numpy as np
y_test = np.array([[12,11,10],
[1,2,2],
[3,2,3],
[4,1,2],
[13,1,10]])
idx = np.array([[0,2], [1,3]])
Generate the mask:
Generate a mask of ones, then zero out the elements at the indices in idx:
mask = np.ones(len(y_test), dtype = int).reshape(5,1)
mask[idx.flatten()] = 0
Finally apply the mask:
y_test[~np.all(y_test * mask == 0, axis=1)]
#=> [[13 1 10]]
y_test has not been modified.

Needing to assess smaller 3D arrays in larger 3D array with Numpy

I have to take a random integer 50x50x50 array and determine which contiguous 3x3x3 cube within it has the largest sum.
It seems like a lot of the splitting features in Numpy don't work well unless the smaller cubes divide evenly into the larger one. To work through the thought process, I made a 48x48x48 cube that just counts up in order from 0 to 110,591. I was then thinking of reshaping it to a 4D array with the following code and assessing which of the subarrays had the largest sum. When I enter this code, though, it splits the array in an order that is not ideal. I want the first array to be the 3x3x3 cube that would have been in the corner of the 48x48x48 cube. Is there a syntax that I can add to make this happen?
import numpy as np
arr1 = np.arange(0,110592)
arr2=np.reshape(arr1, (48,48,48))
arr3 = np.reshape(arr2, (4096, 3,3,3))
arr3
output:
array([[[[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8]],
[[ 9, 10, 11],
[ 12, 13, 14],
[ 15, 16, 17]],
[[ 18, 19, 20],
[ 21, 22, 23],
[ 24, 25, 26]]],
desired output:
array([[[[ 0, 1, 2],
[ 48, 49, 50],
[ 96, 97, 98]],
etc etc
Solution
There's a simple (kind of) solution to your original problem of finding the maximum 3x3x3 subcube in a 50x50x50 cube that's based on changing the input array's strides. This solution is completely vectorized (meaning no looping), and so should get the best possible performance out of Numpy:
import numpy as np
def cubecube(arr, cshape):
    strides = (*arr.strides, *arr.strides)
    shape = (*np.array(arr.shape) - cshape + 1, *cshape)
    return np.lib.stride_tricks.as_strided(arr, shape=shape, strides=strides)

def maxcube(arr, cshape):
    cc = cubecube(arr, cshape)
    ccsums = cc.sum(axis=tuple(range(-arr.ndim, 0)))
    ix = np.unravel_index(np.argmax(ccsums), ccsums.shape)[:arr.ndim]
    return ix, cc[ix]
The maxcube function takes an array and the shape of the subcubes, and returns a tuple of (first-index-of-largest-cube, largest-cube). Here's an example of how to use maxcube:
shape = (50, 50, 50)
cshape = (3, 3, 3)
# set up a 50x50x50 array
arr = np.arange(np.prod(shape)).reshape(*shape)
# set one of the subcubes as the largest
arr[37, 26, 11] = 999999
ix, cube = maxcube(arr, cshape)
print('first index of largest cube: {}'.format(ix))
print('largest cube:\n{}'.format(cube))
which outputs:
first index of largest cube: (37, 26, 11)
largest cube:
[[[999999 93812 93813]
[ 93861 93862 93863]
[ 93911 93912 93913]]
[[ 96311 96312 96313]
[ 96361 96362 96363]
[ 96411 96412 96413]]
[[ 98811 98812 98813]
[ 98861 98862 98863]
[ 98911 98912 98913]]]
In depth explanation
A cube of cubes
What you have is a 48x48x48 cube, but what you want is a cube of smaller cubes. One can be converted to the other by altering its strides. For a 48x48x48 array of dtype int64, the stride will originally be set as (48*48*8, 48*8, 8). The first value of each non-overlapping 3x3x3 subcube can be iterated over with a stride of (3*48*48*8, 3*48*8, 3*8). Combine these strides to get the strides of the cube of cubes:
# Set up a 48x48x48 array, like in OP's example
arr = np.arange(48**3).reshape(48,48,48)
shape = (16,16,16,3,3,3)
strides = (3*48*48*8, 3*48*8, 3*8, 48*48*8, 48*8, 8)
# restride into a 16x16x16 array of 3x3x3 cubes
arr2 = np.lib.stride_tricks.as_strided(arr, shape=shape, strides=strides)
arr2 is a view of arr (meaning that they share data, so no copy needs to be made) with a shape of (16,16,16,3,3,3). The ijk-th 3x3x3 cube in arr can be accessed by passing the indices to arr2:
i,j,k = 0,0,0
print(arr2[i,j,k])
Output:
[[[ 0 1 2]
[ 48 49 50]
[ 96 97 98]]
[[2304 2305 2306]
[2352 2353 2354]
[2400 2401 2402]]
[[4608 4609 4610]
[4656 4657 4658]
[4704 4705 4706]]]
You can get the sums of all of the subcubes by just summing across the inner axes:
sumOfSubcubes = arr2.sum(axis=(3, 4, 5))
This will yield a 16x16x16 array in which each value is the sum of a non-overlapping 3x3x3 subcube from your original array. This solves the specific problem about the 48x48x48 array that the OP asked about. Restriding can also be used to find all of the overlapping 3x3x3 cubes, as in the cubecube function above.
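As a quick sanity check of the restriding idea (a sketch; the strides are computed from arr.itemsize instead of hard-coding 8 bytes):
import numpy as np

arr = np.arange(48**3).reshape(48, 48, 48)
b = arr.itemsize
arr2 = np.lib.stride_tricks.as_strided(
    arr,
    shape=(16, 16, 16, 3, 3, 3),
    strides=(3*48*48*b, 3*48*b, 3*b, 48*48*b, 48*b, b),
)
sums = arr2.sum(axis=(3, 4, 5))                       # shape (16, 16, 16)
# each entry equals the direct sum over the matching non-overlapping subcube
print(sums[1, 2, 3] == arr[3:6, 6:9, 9:12].sum())     # True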
Your thought process with the 48x48x48 cube goes in the right direction insofar as there are 48³ different contiguous 3x3x3 cubes within the 50x50x50 array, though I don't understand why you would want to reshape it.
What you could do is add all 27 values of each 3x3x3 cube into a 48x48x48 array by looping over the 27 offset combinations of adjacent slices and then finding the maximum over it. The entry found will give you the index tuple coordinate_index of the cube corner closest to the origin of your original array.
import numpy as np
np.random.seed(0)
array_shape = np.array((50,50,50), dtype=int)
cube_dim = np.array((3,3,3), dtype=int)
original_array = np.random.randint(0, 100, size=array_shape)
reduced_shape = array_shape - cube_dim + 1
sum_array = np.zeros(reduced_shape, dtype=int)
for i in range(cube_dim[0]):
    for j in range(cube_dim[1]):
        for k in range(cube_dim[2]):
            sum_array += original_array[
                i:i + reduced_shape[0], j:j + reduced_shape[1], k:k + reduced_shape[2]
            ]
flat_index = np.argmax(sum_array)
coordinate_index = np.unravel_index(flat_index, reduced_shape)
This method should be faster than looping over each of the 48³ index combinations to find the desired cube, as it uses in-place summation, but it requires more memory in turn. I'm not sure about it, but defining a (48³, 27) array of slices and using np.sum over the second axis could be even faster; a rough sketch of that idea appears below.
You can easily change the above code to find a cuboid with arbitrary side lengths instead.
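A rough sketch of that (48³, 27) idea, assuming the original_array, cube_dim and reduced_shape defined above and numpy >= 1.20 for sliding_window_view:
import numpy as np

# every overlapping 3x3x3 window as one array, then one sum per window
windows = np.lib.stride_tricks.sliding_window_view(original_array, tuple(cube_dim))
sum_array_alt = (windows.reshape(-1, int(np.prod(cube_dim)))
                 .sum(axis=1)
                 .reshape(tuple(reduced_shape)))
# sum_array_alt should match the sum_array computed by the loop above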
This is a solution without many numpy functions, just numpy.sum. First define a cube-shaped array, and then the size cs of the cube you are going to perform the summation within.
Just change cs to adjust the cube size and find other solutions. Following @Divakar's suggestion, I have used a 4x4x4 array, and I also store the location of the cube (the vertex of the cube closest to the origin).
import numpy as np
np.random.seed(0)
a = np.random.randint(0,9,(4,4,4))
print(a)
cs = 2 # Cube size
my_sum = 0
idx = None
for i in range(a.shape[0] - cs + 1):
    for j in range(a.shape[1] - cs + 1):
        for k in range(a.shape[2] - cs + 1):
            cube_sum = np.sum(a[i:i+cs, j:j+cs, k:k+cs])
            print(cube_sum)
            if cube_sum > my_sum:
                my_sum = cube_sum
                idx = (i, j, k)
print(my_sum, idx) # 42 (0, 0, 0)
This 3D array a is
[[[5 0 3 3]
[7 3 5 2]
[4 7 6 8]
[8 1 6 7]]
[[7 8 1 5]
[8 4 3 0]
[3 5 0 2]
[3 8 1 3]]
[[3 3 7 0]
[1 0 4 7]
[3 2 7 2]
[0 0 4 5]]
[[5 6 8 4]
[1 4 8 1]
[1 7 3 6]
[7 2 0 3]]]
And you get my_sum = 42 and idx = (0, 0, 0) for cs = 2. And my_sum = 112 and idx = (1, 0, 0) for cs = 3
Here is a cumsum based fast solution:
import numpy as np
nd = 3
cs = 3
N = 50
# create indices [cs-1:, ...], [:, cs-1:, ...], ...
fromcsm = *zip(*np.where(np.identity(nd, bool), np.s_[cs-1:], np.s_[:])),
# create indices [cs:, ...], [:, cs:, ...], ...
fromcs = *zip(*np.where(np.identity(nd, bool), np.s_[cs:], np.s_[:])),
# create indices [:cs, ...], [:, :cs, ...], ...
tocs = *zip(*np.where(np.identity(nd, bool), np.s_[:cs], np.s_[:])),
# create indices [:-cs, ...], [:, :-cs, ...], ...
tomcs = *zip(*np.where(np.identity(nd, bool), np.s_[:-cs], np.s_[:])),
# create indices [cs-1, ...], [:, cs-1, ...], ...
atcsm = *zip(*np.where(np.identity(nd, bool), cs-1, np.s_[:])),
def windowed_sum(a):
    out = a.copy()
    for i, (fcsm, fcs, tcs, tmcs, acsm) \
            in enumerate(zip(fromcsm, fromcs, tocs, tomcs, atcsm)):
        out[fcs] -= out[tmcs]
        out[acsm] = out[tcs].sum(axis=i)
        out = out[fcsm].cumsum(axis=i)
    return out
This returns the sums over all the sub cubes. We can then use argmax and unravel_index to get the offset of the maximum cube. Example:
np.random.seed(0)
a = np.random.randint(0,9,(N,N,N))
s = windowed_sum(a)
idx = np.unravel_index(np.argmax(s), s.shape)
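As a quick sanity check (a sketch assuming the a, s and cs from the snippet above), each entry of s should equal the direct sum over the corresponding subcube:
i, j, k = 5, 7, 9
print(s[i, j, k] == a[i:i+cs, j:j+cs, k:k+cs].sum())   # expected: True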

Numpy array and Matlab Matrix are mismatching [3D]

The following octave code shows a sample 3D matrix using Octave/Matlab
octave:1> A=zeros(3,3,3);
octave:2>
octave:2> A(:,:,1)= [[1 2 3];[4 5 6];[7 8 9]];
octave:3>
octave:3> A(:,:,2)= [[11 22 33];[44 55 66];[77 88 99]];
octave:4>
octave:4> A(:,:,3)= [[111 222 333];[444 555 666];[777 888 999]];
octave:5>
octave:5>
octave:5> A
A =
ans(:,:,1) =
1 2 3
4 5 6
7 8 9
ans(:,:,2) =
11 22 33
44 55 66
77 88 99
ans(:,:,3) =
111 222 333
444 555 666
777 888 999
octave:6> A(1,3,2)
ans = 33
And I need to convert the same matrix using numpy ... unfortunately, when I try to access the same index using an array in numpy I get different values, as shown below!
import numpy as np
array = np.array([[[1 ,2 ,3],[4 ,5 ,6],[7 ,8 ,9]], [[11 ,22 ,33],[44 ,55 ,66],[77 ,88 ,99]], [[111 ,222 ,333],[444 ,555 ,666],[777 ,888 ,999]]])
>>> array[0,2,1]
8
Also, I read the following document that shows the difference between matrix implementations in Matlab and in Python numpy, Numpy for Matlab users, but I didn't find a sample 3D array and how it maps between Matlab and numpy and vice versa!
The answer is different: for example, accessing the element (1,3,2) in Matlab doesn't match the same (zero-based) index (0,2,1) in numpy.
Octave/Matlab
octave:6> A(1,3,2)
ans = 33
Python
>>> array[0,2,1]
8
The way your array is constructed in numpy is different than it is in MATLAB.
Where your MATLAB array is (y, x, z), your numpy array is (z, y, x). Your 3d numpy array is a series of 'stacked' 2d arrays, so you're indexing "outside->inside" (for lack of a better term). Here's your array definition expanded so this (hopefully) makes a little more sense:
[[[1, 2, 3],
[4, 5, 6], # Z = 0
[7 ,8 ,9]],
[[11 ,22 ,33],
[44 ,55 ,66], # Z = 1
[77 ,88 ,99]],
[[111 ,222 ,333],
[444 ,555 ,666], # Z = 2
[777 ,888 ,999]]
]
So with:
import numpy as np
A = np.array([[[1 ,2 ,3],[4 ,5 ,6],[7 ,8 ,9]], [[11 ,22 ,33],[44 ,55 ,66],[77 ,88 ,99]], [[111 ,222 ,333],[444 ,555 ,666],[777 ,888 ,999]]])
B = A[1, 0, 2]
B returns 33, as expected.
If you want a less mind-bending way to index your array, consider generating it as you did in MATLAB.
MATLAB and Python index differently. To investigate this, let's create a linear array of the numbers 1 to 8 and then reshape the result to be a 2-by-2-by-2 matrix in each language:
MATLAB:
M_flat = 1:8
M = reshape(M_flat, [2,2,2])
which returns
M =
ans(:,:,1) =
1 3
2 4
ans(:,:,2) =
5 7
6 8
Python:
import numpy as np
P_flat = np.array(range(1,9))
P = np.reshape(P_flat, [2,2,2])
which returns
array([[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]]])
The first thing you should notice is that the first two dimensions have switched. This is because MATLAB uses column-major indexing, which means we count down the columns first, whereas Python uses row-major indexing and hence counts across the rows first.
Now let's try indexing them. So let's try slicing along the different dimensions. In MATLAB, I know to get a slice out of the third dimension I can do
M(:,:,1)
ans =
1 3
2 4
Now let's try the same in Python
P[:,:,0]
array([[1, 3],
[5, 7]])
So that's completely different. To get the MATLAB 'equivalent' we need to go
P[0,:,:]
array([[1, 2],
[3, 4]])
Now this returns the transpose of the MATLAB version, which is to be expected due to the row-major vs column-major difference.
So what does this mean for indexing? It looks like Python puts the major index at the end, which is the reverse of MATLAB.
Let's say I index as follows in MATLAB
M(1,2,2)
ans =
7
now to get the 7 from Python we should go
P[1,1,0]
which is the MATLAB syntax reversed. Note that it is reversed because we created the Python matrix with a row-major ordering in mind. If you create it as you did in your code you would have to swap the last 2 indices, so rather create the matrix correctly in the first place, as Ander has suggested in the comments.
I think better than just calling the difference "row major" or "column major" is numpy's way of describing them:
‘C’ means to read / write the elements using C-like index order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to read / write the elements using Fortran-like index order, with the first index changing fastest, and the last index changing slowest.
Some gifs to illustrate the difference: The first is row-major (python / c), second is column-major (MATLAB/ Fortran)
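A minimal sketch of the order difference in code, reusing the 1-to-8 example from above (variable names are mine):
import numpy as np

P_flat = np.arange(1, 9)
P_c = np.reshape(P_flat, (2, 2, 2))              # C order (default, row-major)
P_f = np.reshape(P_flat, (2, 2, 2), order='F')   # Fortran/MATLAB order (column-major)
print(P_f[:, :, 0])
# [[1 3]
#  [2 4]]   -- same as MATLAB's M(:,:,1)
print(P_f[0, 1, 1])   # 7 -- the element MATLAB addresses as M(1,2,2)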
I think the problem is the way you create the matrix in numpy, and also the different representation used by matlab and numpy. Why don't you use the same layout in numpy as in matlab?
>>> A = np.zeros((3,3,3),dtype=int)
>>> A
array([[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]])
>>> A[:,:,0] = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> A[:,:,1] = np.array([[11,22,33],[44,55,66],[77,88,99]])
>>> A[:,:,2] = np.array([[111,222,333],[444,555,666],[777,888,999]])
>>> A
array([[[ 1, 11, 111],
[ 2, 22, 222],
[ 3, 33, 333]],
[[ 4, 44, 444],
[ 5, 55, 555],
[ 6, 66, 666]],
[[ 7, 77, 777],
[ 8, 88, 888],
[ 9, 99, 999]]])
>>> A[0,2,1]
33
I think that python uses this type of indexing to create arrays as shown in the following figure:
https://www.google.com.eg/search?q=python+indexing+arrays+numpy&biw=1555&bih=805&source=lnms&tbm=isch&sa=X&ved=0ahUKEwia7b2J1qzOAhUFPBQKHXtdCBkQ_AUIBygC#imgrc=7JQu1w_4TCaAnM%3A
And there are many ways to store your data: you can choose order='F' to count down the columns first, as matlab does, while the default, order='C', counts across the rows first.

Operations on 'N' dimensional numpy arrays

I am attempting to generalize some Python code to operate on arrays of arbitrary dimension. The operations are applied to each vector in the array. So for a 1D array there is simply one operation; for a 2D array it would be applied both row- and column-wise (linearly, so order does not matter). For example, a 1D array (a) is simple:
b = operation(a)
where 'operation' is expecting a 1D array. For a 2D array, the operation might proceed as
for ii in range(0, a.shape[0]):
    b[ii, :] = operation(a[ii, :])
for jj in range(0, b.shape[1]):
    c[:, jj] = operation(b[:, jj])
I would like to make this general where I do not need to know the dimension of the array beforehand, and not have a large set of if/elif statements for each possible dimension.
Solutions that are general for 1 or 2 dimensions are ok, though a completely general solution would be preferred. In reality, I do not imagine needing this for any dimension higher than 2, but if I can see a general example I will learn something!
Extra information:
I have a matlab code that uses cells to do something similar, but I do not fully understand how it works. In this example, each vector is rearranged (basically the same function as fftshift in numpy.fft). Not sure if this helps, but it operates on an array of arbitrary dimension.
function aout=foldfft(ain)
nd = ndims(ain);
for k = 1:nd
    nx = size(ain,k);
    kx = floor(nx/2);
    idx{k} = [kx:nx 1:kx-1];
end
aout = ain(idx{:});
In Octave, your MATLAB code does:
octave:19> size(ain)
ans =
2 3 4
octave:20> idx
idx =
{
[1,1] =
1 2
[1,2] =
1 2 3
[1,3] =
2 3 4 1
}
and then it uses the idx cell array to index ain. With these dimensions it 'rolls' the size 4 dimension.
For 5 and 6 the index lists would be:
2 3 4 5 1
3 4 5 6 1 2
The equivalent in numpy is:
In [161]: ain=np.arange(2*3*4).reshape(2,3,4)
In [162]: idx=np.ix_([0,1],[0,1,2],[1,2,3,0])
In [163]: idx
Out[163]:
(array([[[0]],
[[1]]]), array([[[0],
[1],
[2]]]), array([[[1, 2, 3, 0]]]))
In [164]: ain[idx]
Out[164]:
array([[[ 1, 2, 3, 0],
[ 5, 6, 7, 4],
[ 9, 10, 11, 8]],
[[13, 14, 15, 12],
[17, 18, 19, 16],
[21, 22, 23, 20]]])
Besides the 0 based indexing, I used np.ix_ to reshape the indexes. MATLAB and numpy use different syntax to index blocks of values.
The next step is to construct [0,1],[0,1,2],[1,2,3,0] with code, a straightforward translation.
I can use np.r_ as a shortcut for turning 2 slices into an index array:
In [201]: idx=[]
In [202]: for nx in ain.shape:
   .....:     kx = int(np.floor(nx/2.))
   .....:     kx = kx - 1
   .....:     idx.append(np.r_[kx:nx, 0:kx])
   .....:
In [203]: idx
Out[203]: [array([0, 1]), array([0, 1, 2]), array([1, 2, 3, 0])]
and pass this through np.ix_ to make the appropriate index tuple:
In [204]: ain[np.ix_(*idx)]
Out[204]:
array([[[ 1, 2, 3, 0],
[ 5, 6, 7, 4],
[ 9, 10, 11, 8]],
[[13, 14, 15, 12],
[17, 18, 19, 16],
[21, 22, 23, 20]]])
In this case, where 2 dimensions don't roll anything, slice(None) could replace those:
In [210]: idx=(slice(None),slice(None),[1,2,3,0])
In [211]: ain[idx]
======================
np.roll does:
indexes = concatenate((arange(n - shift, n), arange(n - shift)))
res = a.take(indexes, axis)
np.apply_along_axis is another function that constructs an index array (and turns it into a tuple for indexing).
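As a small illustration, here is a sketch that uses np.apply_along_axis with np.fft.fftshift (which the question says foldfft resembles; fftshift's split point may differ slightly from the MATLAB snippet):
import numpy as np

ain = np.arange(2 * 3 * 4).reshape(2, 3, 4)
out = ain
for axis in range(ain.ndim):
    # apply the 1-D reordering along one axis at a time
    out = np.apply_along_axis(np.fft.fftshift, axis, out)
print(out.shape)   # (2, 3, 4) -- shape is unchanged, entries are rolled per axis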
If you are looking for a programmatic way to index the k-th dimension of an n-dimensional array, then numpy.take might help you.
An implementation of foldfft is given below as an example:
In[1]:
import numpy as np
def foldfft(ain):
    result = ain
    nd = len(ain.shape)
    for k in range(nd):
        nx = ain.shape[k]
        kx = (nx+1)//2
        shifted_index = list(range(kx,nx)) + list(range(kx))
        result = np.take(result, shifted_index, k)
    return result
a = np.indices([3,3])
print("Shape of a = ", a.shape)
print("\nStarting array:\n\n", a)
print("\nFolded array:\n\n", foldfft(a))
Out[1]:
Shape of a = (2, 3, 3)
Starting array:
[[[0 0 0]
[1 1 1]
[2 2 2]]
[[0 1 2]
[0 1 2]
[0 1 2]]]
Folded array:
[[[2 0 1]
[2 0 1]
[2 0 1]]
[[2 2 2]
[0 0 0]
[1 1 1]]]
You could use numpy.ndarray.flat, which allows you to linearly iterate over an n-dimensional numpy array. Your code should then look something like this:
b = np.asarray(x)
for i in range(len(x.flat)):
    b.flat[i] = operation(x.flat[i])
The folks above provided multiple appropriate solutions. For completeness, here is my final solution. In this toy example for the case of 3 dimensions, the function 'ops' replaces the first and last element of a vector with 1.
import numpy as np
def ops(s):
    s[0] = 1
    s[-1] = 1
    return s

a = np.random.rand(4,4,3)
print('------')
print('Array a')
print(a)
print('------')
for ii in np.arange(a.ndim):
    a = np.apply_along_axis(ops, ii, a)
    print('------')
    print(' Axis', str(ii))
    print(a)
    print('------')
    print(' ')
The resulting 3D array has a 1 in every element on the 'border' with the numbers in the middle of the array unchanged. This is of course a toy example; however ops could be any arbitrary function that operates on a 1D vector.
Flattening the vector will also work; I chose not to pursue that simply because the book-keeping is more difficult and apply_along_axis is the simplest approach.
apply_along_axis reference page
