Understanding numpy's dstack function

I have some trouble understanding what numpy's dstack function is actually doing. The documentation is rather sparse and just says:
Stack arrays in sequence depth wise (along third axis).
Takes a sequence of arrays and stack them along the third axis
to make a single array. Rebuilds arrays divided by dsplit.
This is a simple way to stack 2D arrays (images) into a single
3D array for processing.
So either I am really stupid and the meaning of this is obvious or I seem to have some misconception about the terms 'stacking', 'in sequence', 'depth wise' or 'along an axis'. However, I was of the impression that I understood these terms in the context of vstack and hstack just fine.
Let's take this example:
In [193]: a
Out[193]:
array([[0, 3],
[1, 4],
[2, 5]])
In [194]: b
Out[194]:
array([[ 6, 9],
[ 7, 10],
[ 8, 11]])
In [195]: dstack([a,b])
Out[195]:
array([[[ 0, 6],
[ 3, 9]],
[[ 1, 7],
[ 4, 10]],
[[ 2, 8],
[ 5, 11]]])
First of all, a and b don't have a third axis so how would I stack them along 'the third axis' to begin with? Second of all, assuming a and b are representations of 2D-images, why do I end up with three 2D arrays in the result as opposed to two 2D-arrays 'in sequence'?

It's easier to understand what np.vstack, np.hstack and np.dstack* do by looking at the .shape attribute of the output array.
Using your two example arrays:
print(a.shape, b.shape)
# (3, 2) (3, 2)
np.vstack concatenates along the first dimension...
print(np.vstack((a, b)).shape)
# (6, 2)
np.hstack concatenates along the second dimension...
print(np.hstack((a, b)).shape)
# (3, 4)
and np.dstack concatenates along the third dimension.
print(np.dstack((a, b)).shape)
# (3, 2, 2)
Since a and b are both two dimensional, np.dstack expands them by inserting a third dimension of size 1. This is equivalent to indexing them in the third dimension with np.newaxis (or alternatively, None) like this:
print(a[:, :, np.newaxis].shape)
# (3, 2, 1)
If c = np.dstack((a, b)), then c[:, :, 0] == a and c[:, :, 1] == b.
You could do the same operation more explicitly using np.concatenate like this:
print(np.concatenate((a[..., None], b[..., None]), axis=2).shape)
# (3, 2, 2)
* Importing the entire contents of a module into your global namespace using import * is considered bad practice for several reasons. The idiomatic way is to import numpy as np.

Let x = dstack([a, b]). Then x[:, :, 0] is identical to a, and x[:, :, 1] is identical to b. In general, when dstacking 2D arrays, dstack produces an output such that output[:, :, n] is identical to the nth input array.
If we stack 3D arrays rather than 2D:
x = numpy.zeros([2, 2, 3])
y = numpy.ones([2, 2, 4])
z = numpy.dstack([x, y])
then z[:, :, :3] would be identical to x, and z[:, :, 3:7] would be identical to y.
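A quick check of those claims:
import numpy
x = numpy.zeros([2, 2, 3])
y = numpy.ones([2, 2, 4])
z = numpy.dstack([x, y])
print(z.shape)                            # (2, 2, 7)
print(numpy.array_equal(z[:, :, :3], x))  # True
print(numpy.array_equal(z[:, :, 3:7], y)) # True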
As you can see, we have to take slices along the third axis to recover the inputs to dstack. That's why dstack behaves the way it does.

I'd like to take a stab at visually explaining this (even though the accepted answer makes enough sense, it took me a few seconds to rationalise this to my mind).
If we imagine the 2d-arrays as a list of lists, where the 1st axis gives one of the inner lists and the 2nd axis gives the value in that list, then the visual representation of the OP's arrays will be this:
a = [
[0, 3],
[1, 4],
[2, 5]
]
b = [
[6, 9],
[7, 10],
[8, 11]
]
# Shape of each array is [3,2]
Now, according to the current documentation, the dstack function adds a 3rd axis, which means each of the arrays ends up looking like this:
a = [
[[0], [3]],
[[1], [4]],
[[2], [5]]
]
b = [
[[6], [9]],
[[7], [10]],
[[8], [11]]
]
# Shape of each array is [3,2,1]
Now, stacking both these arrays in the 3rd dimension simply means that the result should look, as expected, like this:
dstack([a,b]) = [
[[0, 6], [3, 9]],
[[1, 7], [4, 10]],
[[2, 8], [5, 11]]
]
# Shape of the combined array is [3,2,2]
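To check this picture in code: np.atleast_3d performs the same axis-insertion step described above (which is essentially what dstack does internally), and concatenating the results along the third axis reproduces dstack's output:
import numpy as np
a = np.array([[0, 3], [1, 4], [2, 5]])
b = np.array([[6, 9], [7, 10], [8, 11]])
print(np.atleast_3d(a).shape)  # (3, 2, 1)
stacked = np.concatenate([np.atleast_3d(a), np.atleast_3d(b)], axis=2)
print(np.array_equal(stacked, np.dstack([a, b])))  # True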
Hope this helps.

Because you mention "images", I think this example would be useful. If you're using Keras to train a 2D convolutional network with the input X, then it is best to keep X with the dimensions (#images, dim1ofImage, dim2ofImage).
image1 = np.array([[4,2],[5,5]])
image2 = np.array([[3,1],[6,7]])
image1 = image1.reshape(1,2,2)
image2 = image2.reshape(1,2,2)
X = np.stack((image1,image2),axis=1)
X
array([[[[4, 2],
[5, 5]],
[[3, 1],
[6, 7]]]])
np.shape(X)
(1, 2, 2, 2)
X = X.reshape((2,2,2))
X
array([[[4, 2],
[5, 5]],
[[3, 1],
[6, 7]]])
X[0] # image 1
array([[4, 2],
[5, 5]])
X[1] # image 2
array([[3, 1],
[6, 7]])

Related

Repeat specific row or column of Python numpy 2D array [duplicate]

I'd like to copy a numpy 2D array into a third dimension. For example, given the 2D numpy array:
import numpy as np
arr = np.array([[1, 2], [1, 2]])
# arr.shape = (2, 2)
convert it into a 3D matrix with N such copies in a new dimension. Acting on arr with N=3, the output should be:
new_arr = np.array([[[1, 2], [1,2]],
[[1, 2], [1, 2]],
[[1, 2], [1, 2]]])
# new_arr.shape = (3, 2, 2)
Probably the cleanest way is to use np.repeat:
a = np.array([[1, 2], [1, 2]])
print(a.shape)
# (2, 2)
# indexing with np.newaxis inserts a new 3rd dimension, which we then repeat the
# array along, (you can achieve the same effect by indexing with None, see below)
b = np.repeat(a[:, :, np.newaxis], 3, axis=2)
print(b.shape)
# (2, 2, 3)
print(b[:, :, 0])
# [[1 2]
# [1 2]]
print(b[:, :, 1])
# [[1 2]
# [1 2]]
print(b[:, :, 2])
# [[1 2]
# [1 2]]
Having said that, you can often avoid repeating your arrays altogether by using broadcasting. For example, let's say I wanted to add a (3,) vector:
c = np.array([1, 2, 3])
to a. I could copy the contents of a 3 times in the third dimension, then copy the contents of c twice in both the first and second dimensions, so that both of my arrays were (2, 2, 3), then compute their sum. However, it's much simpler and quicker to do this:
d = a[..., None] + c[None, None, :]
Here, a[..., None] has shape (2, 2, 1) and c[None, None, :] has shape (1, 1, 3)*. When I compute the sum, the result gets 'broadcast' out along the dimensions of size 1, giving me a result of shape (2, 2, 3):
print(d.shape)
# (2, 2, 3)
print(d[..., 0]) # a + c[0]
# [[2 3]
# [2 3]]
print(d[..., 1]) # a + c[1]
# [[3 4]
# [3 4]]
print(d[..., 2]) # a + c[2]
# [[4 5]
# [4 5]]
Broadcasting is a very powerful technique because it avoids the additional overhead involved in creating repeated copies of your input arrays in memory.
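As a rough illustration of that overhead (exact byte counts depend on your default integer dtype):
import numpy as np
a = np.array([[1, 2], [1, 2]])
c = np.array([1, 2, 3])
repeated = np.repeat(a[:, :, np.newaxis], 3, axis=2)  # materialises all 2*2*3 elements
print(repeated.nbytes)   # 96 with a 64-bit integer dtype
d = a[..., None] + c     # no repeated copy of `a` is built first
print(d.shape)           # (2, 2, 3)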
* Although I included them for clarity, the None indices into c aren't actually necessary - you could also do a[..., None] + c, i.e. broadcast a (2, 2, 1) array against a (3,) array. This is because if one of the arrays has fewer dimensions than the other then only the trailing dimensions of the two arrays need to be compatible. To give a more complicated example:
a = np.ones((6, 1, 4, 3, 1)) # 6 x 1 x 4 x 3 x 1
b = np.ones((5, 1, 3, 2)) # 5 x 1 x 3 x 2
result = a + b # 6 x 5 x 4 x 3 x 2
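As an aside, newer NumPy versions (1.20+) also provide np.broadcast_shapes, which lets you check what shape a broadcast would produce without building any arrays:
import numpy as np
print(np.broadcast_shapes((6, 1, 4, 3, 1), (5, 1, 3, 2)))  # (6, 5, 4, 3, 2)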
Another way is to use numpy.dstack. Supposing that you want to repeat the matrix a num_repeats times:
import numpy as np
b = np.dstack([a]*num_repeats)
The trick is to wrap the matrix a into a list of a single element, then using the * operator to duplicate the elements in this list num_repeats times.
For example, if:
a = np.array([[1, 2], [1, 2]])
num_repeats = 5
This repeats the array of [1 2; 1 2] 5 times in the third dimension. To verify (in IPython):
In [110]: import numpy as np
In [111]: num_repeats = 5
In [112]: a = np.array([[1, 2], [1, 2]])
In [113]: b = np.dstack([a]*num_repeats)
In [114]: b[:,:,0]
Out[114]:
array([[1, 2],
[1, 2]])
In [115]: b[:,:,1]
Out[115]:
array([[1, 2],
[1, 2]])
In [116]: b[:,:,2]
Out[116]:
array([[1, 2],
[1, 2]])
In [117]: b[:,:,3]
Out[117]:
array([[1, 2],
[1, 2]])
In [118]: b[:,:,4]
Out[118]:
array([[1, 2],
[1, 2]])
In [119]: b.shape
Out[119]: (2, 2, 5)
At the end we can see that the shape of the matrix is 2 x 2, with 5 slices in the third dimension.
Use a view and get free runtime! Extend generic n-dim arrays to n+1-dim
Introduced in NumPy 1.10.0, we can leverage numpy.broadcast_to to simply generate a 3D view into the 2D input array. The benefit would be no extra memory overhead and virtually free runtime. This would be essential in cases where the arrays are big and we are okay to work with views. Also, this would work with generic n-dim cases.
I would use the word stack in place of copy, as readers might confuse it with the copying of arrays that creates memory copies.
Stack along first axis
If we want to stack input arr along the first axis, the solution with np.broadcast_to to create 3D view would be -
np.broadcast_to(arr,(3,)+arr.shape) # N = 3 here
Stack along third/last axis
To stack input arr along the third axis, the solution to create 3D view would be -
np.broadcast_to(arr[...,None],arr.shape+(3,))
If we actually need a memory copy, we can always append .copy() there. Hence, the solutions would be -
np.broadcast_to(arr,(3,)+arr.shape).copy()
np.broadcast_to(arr[...,None],arr.shape+(3,)).copy()
Here's how the stacking works for the two cases, shown with their shape information for a sample case -
# Create a sample input array of shape (4,5)
In [55]: arr = np.random.rand(4,5)
# Stack along first axis
In [56]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[56]: (3, 4, 5)
# Stack along third axis
In [57]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[57]: (4, 5, 3)
The same solution(s) would work to extend an n-dim input to an (n+1)-dim view output along the first and last axes. Let's explore some higher-dim cases -
3D input case :
In [58]: arr = np.random.rand(4,5,6)
# Stack along first axis
In [59]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[59]: (3, 4, 5, 6)
# Stack along last axis
In [60]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[60]: (4, 5, 6, 3)
4D input case :
In [61]: arr = np.random.rand(4,5,6,7)
# Stack along first axis
In [62]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[62]: (3, 4, 5, 6, 7)
# Stack along last axis
In [63]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[63]: (4, 5, 6, 7, 3)
and so on.
Timings
Let's use a large sample 2D case and get the timings and verify output being a view.
# Sample input array
In [19]: arr = np.random.rand(1000,1000)
Let's prove that the proposed solution is a view indeed. We will use stacking along first axis (results would be very similar for stacking along the third axis) -
In [22]: np.shares_memory(arr, np.broadcast_to(arr,(3,)+arr.shape))
Out[22]: True
Let's get the timings to show that it's virtually free -
In [20]: %timeit np.broadcast_to(arr,(3,)+arr.shape)
100000 loops, best of 3: 3.56 µs per loop
In [21]: %timeit np.broadcast_to(arr,(3000,)+arr.shape)
100000 loops, best of 3: 3.51 µs per loop
Being a view, increasing N from 3 to 3000 changed nothing in the timings, and both are negligible anyway. Hence, efficient both on memory and performance!
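One caveat worth keeping in mind: the broadcast result is a read-only view, so you do need the .copy() if you intend to write to it. For example:
import numpy as np
arr = np.random.rand(4, 5)
view = np.broadcast_to(arr, (3,) + arr.shape)
print(view.flags.writeable)   # False - assigning into `view` would raise ValueError
writable = view.copy()
writable[0, 0, 0] = 1.0       # fine on the copy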
This can now also be achieved using np.tile as follows:
import numpy as np
a = np.array([[1,2],[1,2]])
b = np.tile(a, (3, 1, 1))
b.shape
(3, 2, 2)
b
array([[[1, 2],
[1, 2]],
[[1, 2],
[1, 2]],
[[1, 2],
[1, 2]]])
A=np.array([[1,2],[3,4]])
B=np.asarray([A]*N)
Edit (per @Mr.F), to preserve dimension order:
B=B.T
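Note that .T reverses all axes, so each 2 x 2 block in B.T also gets transposed; if you want the slices along the new last axis to stay equal to A, something like np.moveaxis keeps them intact:
import numpy as np
A = np.array([[1, 2], [3, 4]])
N = 3
B = np.asarray([A] * N)                  # shape (3, 2, 2), B[k] == A
print(np.array_equal(B.T[:, :, 0], A))   # False - B.T[:, :, 0] is A.T
C = np.moveaxis(B, 0, -1)                # shape (2, 2, 3)
print(np.array_equal(C[:, :, 0], A))     # True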
Here's a broadcasting example that does exactly what was requested.
a = np.array([[1, 2], [1, 2]])
a=a[:,:,None]
b=np.array([1]*5)[None,None,:]
Then b*a is the desired result and (b*a)[:,:,0] produces array([[1, 2],[1, 2]]), which is the original a, as does (b*a)[:,:,1], etc.
Summarizing the solutions above:
a = np.arange(9).reshape(3,-1)
b = np.repeat(a[:, :, np.newaxis], 5, axis=2)
c = np.dstack([a]*5)
d = np.tile(a, [5,1,1])
e = np.array([a]*5)
f = np.repeat(a[np.newaxis, :, :], 5, axis=0) # np.repeat again
print('b='+ str(b.shape), b[:,:,-1].tolist())
print('c='+ str(c.shape),c[:,:,-1].tolist())
print('d='+ str(d.shape),d[-1,:,:].tolist())
print('e='+ str(e.shape),e[-1,:,:].tolist())
print('f='+ str(f.shape),f[-1,:,:].tolist())
b=(3, 3, 5) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
c=(3, 3, 5) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
d=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
e=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
f=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
Good luck

how do I add two numpy arrays correctly?

np_mat = np.array([[1, 2], [3, 4], [5, 6]])
np_mat + np.array([10, 10])
I am confused what the difference between np.array([10, 10]) and np.array([[10, 10]]) is. In school I learnt that only matrices with the same dimensions can be added. When I use the shape method on np.array([10, 10]) it gives me (2,)...what does that mean? How is it possible to add np_mat and np.array([10, 10])? The dimensions don't look the same to me. What do I not understand?
It looks like numpy is bending the rules of mathematics here. Indeed, it sums the second matrix [10, 10] with each row of the first, [[1, 2], [3, 4], [5, 6]].
This is called broadcasting (https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). The shape of [10, 10] is (2,) (that is, mathematically, 2) and that of [[1, 2], [3, 4], [5, 6]] is (3, 2) (that is, mathematically, 3 x 2). Therefore, from the general broadcasting rules, you should get a result of shape (3, 2) (that is, mathematically, 3 x 2).
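A minimal check of the shapes involved:
import numpy as np
np_mat = np.array([[1, 2], [3, 4], [5, 6]])
v = np.array([10, 10])
print(np_mat.shape, v.shape)   # (3, 2) (2,)
print((np_mat + v).shape)      # (3, 2)
print(np_mat + v)
# [[11 12]
#  [13 14]
#  [15 16]]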
I am confused what the difference between np.array([10, 10]) and np.array([[10, 10]]) is.
The first is an array. The second is an array of arrays (in memory it is in fact a single block, but that is not relevant here). You could think of the first as a plain vector of length 2 and the second as a row vector (a matrix of size 1 x 2). However, be warned that the distinction between row and column vectors is irrelevant in mathematics until you start interpreting vectors as matrices.
In general you cannot add two arrays of different shapes.
Shape (2,) means that the array is one-dimensional and its single dimension is of size 2, while np.array([[1, 2], [3, 4], [5, 6]]) has shape (3, 2), which means two-dimensional (3 x 2).
You can still add them because their trailing dimensions match (both are 2), so numpy stretches the smaller array across the missing dimension; the same mechanism lets a plain number stand in for an array full of that value. This is called broadcasting in numpy.
I.e. your code gives results equivalent to:
np_mat = np.array([[1, 2], [3, 4], [5, 6]])
np_mat + 10
Or to:
np_mat = np.array([[1, 2], [3, 4], [5, 6]])
np_mat + np.array([[10, 10], [10, 10], [10, 10]])

numpy vectorize dimension increasing function

I would like to create a function that has input: x.shape==(2,2), and outputs y.shape==(2,2,3).
For example:
@np.vectorize
def foo(x):
    # This function doesn't work like I want
    return x, x, x
a = np.array([[1,2],[3,4]])
print(foo(a))
#desired output
[[[1 1 1]
[2 2 2]]
[[3 3 3]
[4 4 4]]]
#actual output
(array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]))
Or maybe:
@np.vectorize
def bar(x):
    # This function doesn't work like I want
    return np.array([x, 2*x, 5])
a = np.array([[1,2],[3,4]])
print(bar(a))
#desired output
[[[1 2 5]
[2 4 5]]
[[3 6 5]
[4 8 5]]]
Note that foo is just an example. I want a way to map over a numpy array (which is what vectorize is supposed to do), but have that map take a 0d object and shove a 1d object in its place. It also seems to me that the dimensions here are arbitrary, as one might wish to take a function that takes a 1d object and returns a 3d object, vectorize it, call it on a 5d object, and get back a 7d object.... However, my specific use case only requires vectorizing a 0d to 1d function, and mapping it appropriately over a 2d array.
It would help, in your question, to show both the actual result and your desired result. As written that isn't very clear.
In [79]: foo(np.array([[1,2],[3,4]]))
Out[79]:
(array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]))
As indicated in the vectorize docs, this has returned a tuple of arrays, corresponding to the tuple of values that your function returned.
Your bar returns an array, whereas vectorize expected it to return a scalar (or single value):
In [82]: bar(np.array([[1,2],[3,4]]))
ValueError: setting an array element with a sequence.
vectorize takes an otypes parameter that sometimes helps. For example if I say that bar (without the wrapper) returns an object, I get:
In [84]: f=np.vectorize(bar, otypes=[object])
In [85]: f(np.array([[1,2],[3,4]]))
Out[85]:
array([[array([1, 2, 5]), array([2, 4, 5])],
[array([3, 6, 5]), array([4, 8, 5])]], dtype=object)
A (2,2) array of (3,) arrays. The (2,2) shape matches the shape of the input.
vectorize has a relatively new parameter, signature
In [90]: f=np.vectorize(bar, signature='()->(n)')
In [91]: f(np.array([[1,2],[3,4]]))
Out[91]:
array([[[1, 2, 5],
[2, 4, 5]],
[[3, 6, 5],
[4, 8, 5]]])
In [92]: _.shape
Out[92]: (2, 2, 3)
I haven't used this much, so am still getting a feel for how it works. When I've tested it, it is slower than the original scalar version of vectorize. Neither offers any speed advantage over explicit loops. However vectorize does help when 'broadcasting', allowing you to use a variety of input shapes. That's even more useful when your function takes several inputs, not just one as in this case.
In [94]: f(np.array([1,2]))
Out[94]:
array([[1, 2, 5],
[2, 4, 5]])
In [95]: f(np.array(3))
Out[95]: array([3, 6, 5])
For best speed, you want to use existing numpy whole-array functions where possible. For example your foo case can be done with:
In [97]: np.repeat(a[:,:,None],3, axis=2)
Out[97]:
array([[[1, 1, 1],
[2, 2, 2]],
[[3, 3, 3],
[4, 4, 4]]])
np.stack([a]*3, axis=2) also works.
And your bar desired result:
In [100]: np.stack([a, 2*a, np.full(a.shape, 5)], axis=2)
Out[100]:
array([[[1, 2, 5],
[2, 4, 5]],
[[3, 6, 5],
[4, 8, 5]]])
2*a takes advantage of the whole-array multiplication. That's true 'numpy-onic' thinking.
Just repeating the value into another dimension is quite simple:
import numpy as np
x = a = np.array([[1,2],[3,4]])
y = np.repeat(x[:,:,np.newaxis], 3, axis=2)
print y.shape
print y
(2L, 2L, 3L)
[[[1 1 1]
[2 2 2]]
[[3 3 3]
[4 4 4]]]
This seems to work for the "map an R0 -> R1 function over an n-d array, giving an (n+1)-d one" case:
def foo(x):
    return np.concatenate((x, x))
np.apply_along_axis(foo, 2, x.reshape(list(x.shape) + [1]))
doesn't generalize all that well, though

Numpy stack with unequal shapes

I've noticed that the solution to combining 2D arrays into 3D arrays through np.stack, np.dstack, or simply passing a list of arrays only works when the arrays have the same .shape[0].
For instance, say I have:
print(arr)
[[0 1]
[2 3]
[4 5]
[6 7]
[8 9]]
it is easy to get to:
print(np.array([arr[2:4], arr[3:5]])) # same shape
[[[4 5]
[6 7]]
[[6 7]
[8 9]]]
However, if I pass a list of arrays of unequal length, I get:
print(np.array([arr[:2], arr[:3]]))
[array([[0, 1],
[2, 3]])
array([[0, 1],
[2, 3],
[4, 5]])]
How can I get to simply:
[[[0, 1]
[2, 3]]
[[0, 1]
[2, 3]
[4, 5]]]
What I've tried: a number of other Array manipulation routines.
Note: ultimately want to do this for more than 2 arrays, so np.append is probably not ideal.
Numpy arrays have to be rectangular, so what you are trying to get is not possible with a numpy array.
You need a different data structure. Which one is suitable depends on what you want to do with that data.
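If you just need to keep the ragged pieces together, a plain Python list (or an explicitly object-dtype array) is the usual workaround; a minimal sketch:
import numpy as np
arr = np.arange(10).reshape(5, 2)
ragged = [arr[:2], arr[:3]]            # a list of differently-shaped arrays
# An object-dtype array, if you really want a numpy container
# (recent NumPy versions refuse to build ragged arrays implicitly):
container = np.empty(len(ragged), dtype=object)
for i, piece in enumerate(ragged):
    container[i] = piece
print(container[1].shape)              # (3, 2)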
I've made a function that works for this problem, assuming that you are willing to pad to make the shape rectangular; it also handles arbitrarily higher-dimensional arrays. It could probably be optimised further, but it's not too bad.
import numpy as np
def stack_uneven(arrays, fill_value=0.):
    '''
    Fits arrays into a single numpy array, even if they are
    different sizes. `fill_value` is the value used to pad.
    Args:
        arrays: list of np arrays of various sizes
            (must be same rank, but not necessarily same size)
        fill_value (float, optional): value used to pad the smaller arrays
    Returns:
        np.ndarray
    '''
    sizes = [a.shape for a in arrays]
    max_sizes = np.max(list(zip(*sizes)), -1)
    # The resultant array is stacked on the first dimension
    result = np.full((len(arrays),) + tuple(max_sizes), fill_value)
    for i, a in enumerate(arrays):
        # The shape of this array `a`, turned into slices
        slices = tuple(slice(0, s) for s in sizes[i])
        # Overwrite a block slice of `result` with this array `a`
        result[i][slices] = a
    return result
The only caveat to using this is that the input must be able to be treated as a sequence of numpy arrays. So for your example of
arr = np.array([[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]])
stack_uneven([arr[:2], arr[:3]], 0)
This would give you
array([[[0, 1],
[2, 3],
[0, 0]],
[[0, 1],
[2, 3],
[4, 5]]])
But this works equally well for higher-dimensional inputs, like:
arr = [np.ones([3, 2, 2]), np.ones([2, 3, 2]), np.ones([2, 2, 3])]
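For that higher-dimensional example, each array is padded up to the per-axis maximum of (3, 3, 3), so the stacked result has shape (3, 3, 3, 3):
import numpy as np
arr = [np.ones([3, 2, 2]), np.ones([2, 3, 2]), np.ones([2, 2, 3])]
padded = stack_uneven(arr, 0)   # stack_uneven as defined above
print(padded.shape)             # (3, 3, 3, 3)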
The function np.stack joins multiple arrays along a new axis, not an existing one. See:
>>> import numpy as np
>>> arr = np.array(range(10)).reshape((5,2))
>>> print arr
[[0 1]
[2 3]
[4 5]
[6 7]
[8 9]]
>>> t1 = np.array([arr[2:4], arr[3:5]])
>>> print t1.shape
(2, 2, 2)
It's not creating a new array of shape (4,2) which I think you're intending. Look at np.concatenate for that.
Note if you really want to use stack, the docs require all input arrays be the same shape:
Parameters: arrays : sequence of array_like Each array must have the
same shape.
So what you're doing is going to have undefined behavior.
EDIT: I read too quickly. You are trying to add an axis. Still, you can't pass uneven shapes to stack. You would have to pad them all to the same shape. Example:
arr = np.array(range(10)).reshape((5,2))
print arr
arr_p1 = np.zeros(arr[0:3].shape)
arr_p1_src = arr[0:2]
arr_p1[:arr_p1_src.shape[0],:arr_p1_src.shape[1]] = arr_p1_src
t2 = np.array([arr_p1, arr[0:3]])
print t2
Output:
[[[ 0. 1.]
[ 2. 3.]
[ 0. 0.]]
[[ 0. 1.]
[ 2. 3.]
[ 4. 5.]]]
Alternatively, np.vstack or np.hstack can be useful, if a vertical or horizontal stack is enough for you and you have at least one equal dimension.
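For example, with the arr from the question, vstack happily joins pieces with different numbers of rows, because only the trailing dimension has to match:
import numpy as np
arr = np.arange(10).reshape(5, 2)
print(np.vstack([arr[:2], arr[:3]]).shape)   # (5, 2) - joined along the existing first axis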

Efficiently change order of numpy array

I have a 3-dimensional numpy array. Its shape can go up to 128 x 64 x 8192. What I want to do is to change the order along the first dimension by interchanging elements pairwise (swapping each even-indexed element with the following odd-indexed one).
The only idea I had so far is to create a list of the indices in the correct order.
order = [1,0,3,2...127,126]
data_new = data[order]
I fear that this is not very efficient, but I have no better idea so far.
You could reshape to split the first axis into two axes, such that the latter of those axes is of length 2, then flip the array along that axis with [::-1], and finally reshape back to the original shape.
Thus, we would have an implementation like so -
a.reshape(-1,2,*a.shape[1:])[:,::-1].reshape(a.shape)
Sample run -
In [170]: a = np.random.randint(0,9,(6,3))
In [171]: order = [1,0,3,2,5,4]
In [172]: a[order]
Out[172]:
array([[0, 8, 5],
[4, 5, 6],
[0, 0, 2],
[7, 3, 8],
[1, 6, 3],
[2, 4, 4]])
In [173]: a.reshape(-1,2,*a.shape[1:])[:,::-1].reshape(a.shape)
Out[173]:
array([[0, 8, 5],
[4, 5, 6],
[0, 0, 2],
[7, 3, 8],
[1, 6, 3],
[2, 4, 4]])
Alternatively, if you are looking to efficiently create that pairwise-flipped index order, we could do something like this -
order = np.arange(data.shape[0]).reshape(-1,2)[:,::-1].ravel()
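A quick sanity check of that index pattern:
import numpy as np
order = np.arange(8).reshape(-1, 2)[:, ::-1].ravel()
print(order)   # [1 0 3 2 5 4 7 6]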
