Understanding axis in Python - python

I have seen a couple of code snippets using numpy.apply_along_axis, and I always have to run them to see how they work, because I don't yet understand the axis idea in NumPy.
For example, I tested these simple snippets from the reference.
I can see that in the first case it took the first column of each row of the matrix, and in the second case the row itself was considered.
So I built an example to test how this works with an array of matrices (the problem that led me to this axis question), which can also be seen as a 3D matrix, where each row is a matrix, right?
a = [[[1,2,3],[2,3,4]],[[4,5,6],[9,8,7]]]
import numpy
data = numpy.array([b for b in a])
def my_func(x):
    return (x[0] + x[-1]) * 0.5
b = numpy.apply_along_axis(my_func, 0, data)
b = numpy.apply_along_axis(my_func, 1, data)
Which gave me:
array([[ 2.5, 3.5, 4.5],
[ 5.5, 5.5, 5.5]])
And:
array([[ 1.5, 2.5, 3.5],
[ 6.5, 6.5, 6.5]])
For the first result I got what I expected. But for the second one, I thought I would receive:
array([[ 2., 3.],
[ 5., 8.]])
Then I thought that maybe it should be axis=2, and testing that gave me the previous result. So I'm wondering how this works so that I can use it properly.
Thank you.

First, data=numpy.array(a) is already enough, no need to use numpy.array([b for b in a]).
data is now a 3D ndarray with shape (2,2,3), and it has 3 axes: 0, 1 and 2. The first axis has length 2, the second axis also has length 2, and the third axis has length 3.
Therefore both numpy.apply_along_axis(my_func, 0, data) and numpy.apply_along_axis(my_func, 1, data) result in a 2D array of shape (2,3). In both cases that is the shape of the remaining axes: the 2nd and 3rd axes when applying along axis 0, or the 1st and 3rd axes when applying along axis 1.
numpy.apply_along_axis(my_func, 2, data) returns the (2,2) shape array you showed, where (2,2) is the shape of the first 2 axes, as you apply along the 3rd axis (by giving index 2).
The way to understand it is: whichever axis you apply along is 'collapsed' into the shape of my_func's return value, which in this case is a single value. The order and shape of the remaining axes stay unchanged.
The alternative way to think of it is: apply_along_axis means applying the function to the values along that axis, for each combination of the remaining axis/axes, then fetching the results and arranging them back into the shape of the remaining axis/axes. So, if my_func returns a tuple of 4 values:
def my_func(x):
    return (x[0] + x[-1]) * 2, 1, 1, 1
we will expect numpy.apply_along_axis(my_func, 0, data).shape to be (4,2,3).
See also numpy.apply_over_axes for applying a function repeatedly over multiple axes.
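As a minimal sketch (not part of the original answer, with placeholder function names), you can verify these shapes yourself:
import numpy as np

data = np.array([[[1, 2, 3], [2, 3, 4]],
                 [[4, 5, 6], [9, 8, 7]]])              # shape (2, 2, 3)

def midpoint(x):                                       # returns one value per 1-D slice
    return (x[0] + x[-1]) * 0.5

print(np.apply_along_axis(midpoint, 0, data).shape)    # (2, 3) - axis 0 collapsed
print(np.apply_along_axis(midpoint, 1, data).shape)    # (2, 3) - axis 1 collapsed
print(np.apply_along_axis(midpoint, 2, data).shape)    # (2, 2) - axis 2 collapsed

def four_values(x):                                    # returns 4 values per 1-D slice
    return (x[0] + x[-1]) * 2, 1, 1, 1

print(np.apply_along_axis(four_values, 0, data).shape) # (4, 2, 3)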

Let there be an array of shape (2,2,3). Axis 0, axis 1 and axis 2 have 2, 2 and 3 data values respectively. These are the indices of the elements of the array:
[
[
[(0,0,0) (0,0,1), (0,0,2)],
[(0,1,0) (0,1,1), (0,1,2)]
],
[
[(1,0,0) (1,0,1), (1,0,2)],
[(1,1,0) (1,1,1), (1,1,2)]
]
]
Now if you apply some operation along some axis, you vary the indices along that axis only, keeping the indices along the other two axes constant.
Example: If we apply some operation F along axis 0, then the elements of the result would be
[
[F((0,0,0),(1,0,0)), F((0,0,1),(1,0,1)), F((0,0,2),(1,0,2))],
[F((0,1,0),(1,1,0)), F((0,1,1),(1,1,1)), F((0,1,2),(1,1,2))]
]
Along axis 1:
[
[F((0,0,0),(0,1,0)), F((0,0,1),(0,1,1)), F((0,0,2),(0,1,2))],
[F((1,0,0),(1,1,0)), F((1,0,1),(1,1,1)), F((1,0,2),(1,1,2))]
]
Along axis 2:
[
[F((0,0,0),(0,0,1),(0,0,2)), F((0,1,0),(0,1,1),(0,1,2))],
[F((1,0,0),(1,0,1),(1,0,2)), F((1,1,0),(1,1,1),(1,1,2))]
]
Also, the shape of the resulting array can be inferred by omitting the given axis from the shape of the given data.
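To make that shape rule concrete, here is a small sketch (my addition) using np.sum as the operation F:
import numpy as np

arr = np.arange(12).reshape(2, 2, 3)
print(arr.sum(axis=0).shape)   # (2, 3) - axis 0 omitted from (2, 2, 3)
print(arr.sum(axis=1).shape)   # (2, 3) - axis 1 omitted
print(arr.sum(axis=2).shape)   # (2, 2) - axis 2 omitted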

Perhaps checking the shape of your array will help clarify which axis is which;
print data.shape
>> (2,2,3)
This means that calling
numpy.apply_along_axis(my_func, 2, data)
should indeed give a 2x2 matrix, namely
array([[ 2., 3.],
[ 5., 8.]])
because the 3rd axis (index 2) has length 3, while the remaining axes are both length 2.
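For completeness, a quick reproduction of that result (reusing the data and my_func from the question):
import numpy as np

data = np.array([[[1, 2, 3], [2, 3, 4]],
                 [[4, 5, 6], [9, 8, 7]]])

def my_func(x):
    return (x[0] + x[-1]) * 0.5

print(np.apply_along_axis(my_func, 2, data))
# [[2. 3.]
#  [5. 8.]]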

Lack of knowledge about what dimensions really represent

A 2d array consists of 2 axes: axis=0 represents the rows and axis=1 represents the columns.
aa = np.random.randn(10, 2) # Here is a 2d array: the first axis has 10 rows and the second axis has 2 columns
array([[ 0.6999521 , -0.17597954],
[ 1.70622947, -0.85919459],
[-0.90019284, 0.80774052],
[-1.42953238, 0.19727917],
[-0.03416532, 0.49584749],
[-0.28981586, -0.77484498],
[-1.31129122, 0.423833 ],
[-0.43920016, -1.93541758],
[-0.06667634, 2.09925218],
[ 1.24633485, -0.04153847]])
Why, when I want to scatter the points, do I only take the first column and the second column, i.e. slices along axis=1? Do dimensions mean columns when plotting and at other times mean axes? Can you please explain the reasons for doing it like this? And are there good references on dimensions that I could benefit from?
plt.scatter(x[:,0], x[:,1]) # this also means dimensions or columns?
x[:,0], x[:,1]: why not do x[0,:], x[:,1]?
It can be difficult to visualize this, especially in multiple dimensions.
The parameters to the [] operator represent the dimensions. Your first dimension is the rows. The first row is array[0]. Your second dimension is the columns. The entire second column is called array[:,1] -- the ":" is a numpy notation that means "take all of this dimension". array[2,1] refers to the second column in the third row.
plt.scatter expects the x coordinate values as its first parameter, and the y coordinate values as its second parameter. plt.scatter(x[:,0], x[:,1]) means "take all of column 0" and "take all of column 1", which is the way scatter wants them.
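A short sketch of the indexing described above (the data here is made up, and the plot assumes matplotlib is available):
import numpy as np
import matplotlib.pyplot as plt

pts = np.random.randn(10, 2)      # 10 rows (points), 2 columns (x and y values)
print(pts[0])                     # first row: one point, shape (2,)
print(pts[:, 1])                  # entire second column: all y values, shape (10,)
print(pts[2, 1])                  # second column of the third row: a single value

plt.scatter(pts[:, 0], pts[:, 1]) # x values first, y values second
plt.show()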
With this randn call you make a 2d array with the specified shape. The dimensions, 10 and 2, don't represent anything - that's an abstract (10,2) array. Meaning comes from how you use it.
In [50]: aa = np.random.randn(10, 2)
In [51]: aa
Out[51]:
array([[-0.26769106, 0.09882999],
[-1.5605514 , -1.38614473],
[ 1.23312852, 0.86838848],
[ 1.2603898 , 2.19895989],
[-1.66937976, 0.79666952],
[-0.15596669, 1.47848784],
[ 1.74964902, 0.39280584],
[-1.0982447 , 0.46888408],
[ 0.84396231, -0.34809148],
[-0.83489678, -1.8093045 ]])
That's a display - with rows and columns.
Rather than pass the slices directly to scatter, let's assign them to variables:
In [52]: x = aa[:,0]; y = aa[:,1]; x,y
Out[52]:
(array([-0.26769106, -1.5605514 , 1.23312852, 1.2603898 , -1.66937976,
-0.15596669, 1.74964902, -1.0982447 , 0.84396231, -0.83489678]),
array([ 0.09882999, -1.38614473, 0.86838848, 2.19895989, 0.79666952,
1.47848784, 0.39280584, 0.46888408, -0.34809148, -1.8093045 ]))
We now have two 1d arrays with shape (10,) (that's a 1 element tuple). We can then plot them with:
In [53]: plt.scatter(x,y)
I could just as well have used
x = np.arange(10); y = np.random.randn(10)
to make two 1d arrays.
The dimensions of the aa array have nothing to do with the axes of a scatter plot.
I could select a 'row' of aa, but will only get a (2,) shape array. That can't be plotted against a (10,) array:
In [53]: aa[0,:]
Out[53]: array([-0.26769106, 0.09882999])
As for the meaning of dimensions in sum/mean, why not experiment?
Sum all values:
In [54]: aa.sum()
Out[54]: 2.2598841819604134
sum down the columns, resulting in one value per column:
In [55]: aa.sum(axis=0)
Out[55]: array([-0.49960074, 2.75948492])
It can help to use keepdims, producing a (1,2) array:
In [56]: aa.sum(axis=0, keepdims=True)
Out[56]: array([[-0.49960074, 2.75948492]])
or a (10,1) array:
In [57]: aa.sum(axis=1, keepdims=True)
Out[57]:
array([[-0.16886107],
[-2.94669614],
[ 2.101517 ],
[ 3.45934969],
[-0.87271024],
[ 1.32252115],
[ 2.14245486],
[-0.62936062],
[ 0.49587083],
[-2.64420128]])
There's some ambiguity when talking about summing along rows or columns of 2d arrays. It becomes clearer when we apply sum to 1d arrays (where there is only one axis to sum over), or to 3d ones.
For example, note which dimension is missing when I do:
In [58]: np.arange(24).reshape(2,3,4).sum(axis=1).shape
Out[58]: (2, 4)
or
In [59]: np.arange(24).reshape(2,3,4).sum(axis=2)
Out[59]:
array([[ 6, 22, 38],
[54, 70, 86]])
Again - dimensions of numpy arrays are abstract things. An array can have 0, 1, 2 or more (up to 32) dimensions. Most of linear algebra deals with 2d arrays, matrices and "vectors". You can do LA with numpy, but numpy is used for much more.
edit
You could think of your aa as 10 2-element points. Then aa[:,0] are all the x coordinates. A mean with axis=0 would be the "center of mass" of those points.
In [60]: np.mean(aa, axis=0)
Out[60]: array([-0.04996007, 0.27594849])
Mean on axis=1 may not make sense, though you could calculate the norm of the points (sqrt(x^2+y^2)), or the length of the vectors represented by the points.
In [61]: np.linalg.norm(aa, axis=1)
Out[61]:
array([0.28535218, 2.08727523, 1.50821235, 2.53456249, 1.84973271,
1.48669159, 1.79320052, 1.19414978, 0.91292938, 1.99264533])
For direction of these points I'd use:
np.arctan2(aa[:,0], aa[:,1])
(or maybe switch the 0 and 1).

3d array to matrix multiplication

I have a matrix called vec with two columns, vec[:,0] and vec[:,1]. P contains two matrices, P[0,:,:] and P[1,:,:]. I want to multiply P[0,:,:] with the first column of vec and P[1,:,:] with the second column of vec. However, the operation P@vec also gives me the matrix product of P[0,:,:] with the second column of vec and the matrix product of P[1,:,:] with the first column of vec, which slows down my code.
Is it possible to directly compute the pairs column 1 to matrix 1 and column 2 to matrix 2 without the "off" products?
import numpy as np
P = np.arange(50).reshape(2, 5, 5)
vec = np.arange(10).reshape(5, 2)
have = P @ vec
want = np.column_stack((have[0,:,0], have[1,:,1]))
have, want
There is a very powerful function in numpy called np.einsum. It can perform all kinds of tensor contractions, axis reordering and matrix multiplication. For your example you could use
res = np.einsum('nij,jn->in', P, vec)
after which res is exactly like want.
How does this work:
You give the np.einsum function both your arrays as well as a signature (that 'nij,jn->in' string) that tells the function how to multiply the arrays. In short, you want the third axis of the P tensor to be contracted with the first axis of vec. Therefore you choose the same index j in the signature string and leave it out in the part after the ->. Indices that appear on both the left- and right-hand side of the -> (here n and i) are simply carried through to the result.
A more complete explanation of this very powerful function with many examples of how to use it can be found at the corresponding numpy documentation.
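As a quick sanity check (a small sketch reusing the arrays from the question), the einsum result matches want exactly:
import numpy as np

P = np.arange(50).reshape(2, 5, 5)
vec = np.arange(10).reshape(5, 2)

have = P @ vec
want = np.column_stack((have[0, :, 0], have[1, :, 1]))

res = np.einsum('nij,jn->in', P, vec)
print(np.array_equal(res, want))   # True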
@ / matmul handles batches nicely, but the rules are that for 3d arrays the first dimension is the batch, and the dot is done on the last 2 dimensions, with the usual "last of A with the second-to-last of B" pairing.
It took a bit of reading to decipher your description, but it appears that you want the first axis of P to be the batch, and the last axis of vec to be the batch. That means vec needs to be transformed to a (2,5,1) to work with the (2,5,5) P.
In [176]: P @ vec.T[:,:,None]
Out[176]:
array([[[ 60],
[ 160],
[ 260],
[ 360],
[ 460]],
[[ 695],
[ 820],
[ 945],
[1070],
[1195]]])
The result is (2,5,1). We can squeeze out the last dimension to get (2,5), but apparently you want a (5,2):
In [179]: (P @ vec.T[:,:,None])[...,0].T
Out[179]:
array([[ 60, 695],
[ 160, 820],
[ 260, 945],
[ 360, 1070],
[ 460, 1195]])
np.einsum('nij,jn->in', P, vec) does effectively the same, with the n as the batch dimension that is 'carried through' to the result, and sum-of-products on the shared j dimension.
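A brief check (same assumptions as above) that the batched-matmul expression and the einsum expression agree:
import numpy as np

P = np.arange(50).reshape(2, 5, 5)
vec = np.arange(10).reshape(5, 2)

via_matmul = (P @ vec.T[:, :, None])[..., 0].T
via_einsum = np.einsum('nij,jn->in', P, vec)
print(np.array_equal(via_matmul, via_einsum))   # True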

Multiply arrays along a given axis [duplicate]

It seems I am getting lost in something potentially silly.
I have an n-dimensional numpy array, and I want to multiply it with a vector (1d array) along some dimension (which can change!).
As an example, say I want to multiply a 2d array by a 1d array along axis 0 of the first array, I can do something like this:
a=np.arange(20).reshape((5,4))
b=np.ones(5)
c=a*b[:,np.newaxis]
Easy, but I would like to extend this idea to n-dimensions (for a, while b is always 1d) and to any axis. In other words, I would like to know how to generate a slice with the np.newaxis at the right place. Say that a is 3d and I want to multiply along axis=1, I would like to generate the slice which would correctly give:
c=a*b[np.newaxis,:,np.newaxis]
I.e. given the number of dimensions of a (say 3), and the axis along which I want to multiply (say axis=1), how do I generate and pass the slice:
np.newaxis,:,np.newaxis
Thanks.
Solution Code -
import numpy as np
# Given axis along which elementwise multiplication with broadcasting
# is to be performed
given_axis = 1
# Create an array which would be used to reshape 1D array, b to have
# singleton dimensions except for the given axis where we would put -1
# signifying to use the entire length of elements along that axis
dim_array = np.ones((1,a.ndim),int).ravel()
dim_array[given_axis] = -1
# Reshape b with dim_array and perform elementwise multiplication with
# broadcasting along the singleton dimensions for the final output
b_reshaped = b.reshape(dim_array)
mult_out = a*b_reshaped
Sample run for a demo of the steps -
In [149]: import numpy as np
In [150]: a = np.random.randint(0,9,(4,2,3))
In [151]: b = np.random.randint(0,9,(2,1)).ravel()
In [152]: whos
Variable Type Data/Info
-------------------------------
a ndarray 4x2x3: 24 elems, type `int32`, 96 bytes
b ndarray 2: 2 elems, type `int32`, 8 bytes
In [153]: given_axis = 1
Now, we would like to perform elementwise multiplications along given axis = 1. Let's create dim_array:
In [154]: dim_array = np.ones((1,a.ndim),int).ravel()
...: dim_array[given_axis] = -1
...:
In [155]: dim_array
Out[155]: array([ 1, -1, 1])
Finally, reshape b & perform the elementwise multiplication:
In [156]: b_reshaped = b.reshape(dim_array)
...: mult_out = a*b_reshaped
...:
Check out the whos info again and pay special attention to b_reshaped & mult_out:
In [157]: whos
Variable Type Data/Info
---------------------------------
a ndarray 4x2x3: 24 elems, type `int32`, 96 bytes
b ndarray 2: 2 elems, type `int32`, 8 bytes
b_reshaped ndarray 1x2x1: 2 elems, type `int32`, 8 bytes
dim_array ndarray 3: 3 elems, type `int32`, 12 bytes
given_axis int 1
mult_out ndarray 4x2x3: 24 elems, type `int32`, 96 bytes
Avoid copying data and wasting resources!
Utilizing broadcasting and views, instead of actually copying the data N times into a new array with the appropriate shape (as existing answers do), is far more memory efficient. Here is such a method (based on @ShuxuanXU's code):
def mult_along_axis(A, B, axis):
    # ensure we're working with NumPy arrays
    A = np.array(A)
    B = np.array(B)

    # shape check
    if axis >= A.ndim:
        raise np.AxisError(axis, A.ndim)
    if A.shape[axis] != B.size:
        raise ValueError(
            "Length of 'A' along the given axis must be the same as B.size"
        )

    # np.broadcast_to puts the new axis as the last axis, so
    # we swap the given axis with the last one, to determine the
    # corresponding array shape. np.swapaxes only returns a view
    # of the supplied array, so no data is copied unnecessarily.
    shape = np.swapaxes(A, A.ndim - 1, axis).shape

    # Broadcast to an array with the shape as above. Again,
    # no data is copied, we only get a new look at the existing data.
    B_brc = np.broadcast_to(B, shape)

    # Swap back the axes. As before, this only changes our "point of view".
    B_brc = np.swapaxes(B_brc, A.ndim - 1, axis)

    return A * B_brc
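A hypothetical usage of mult_along_axis as defined above (values chosen only for illustration):
import numpy as np

A = np.arange(24).reshape(2, 3, 4)
B = np.array([10, 20, 30])                           # length matches A.shape[1]
out = mult_along_axis(A, B, axis=1)
print(out.shape)                                     # (2, 3, 4)
print(np.array_equal(out, A * B.reshape(1, 3, 1)))   # True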
You could build a slice object, and select the desired dimension in that:
import numpy as np
a = np.arange(18).reshape((3, 2, 3))
b = np.array([1, 3])
ss = [None] * a.ndim
ss[1] = slice(None)   # set the dimension along which to broadcast
print(ss)             # [None, slice(None, None, None), None]
c = a * b[tuple(ss)]  # index with a tuple; indexing with a plain list is no longer supported
I had a similar need when I was working on some numerical calculations.
Let's assume we have two arrays (A and B) and a user-specified 'axis'.
A is a multi-dimensional array.
B is a 1-d array.
The basic idea is to expand B so that A and B have the same shape. Here is the solution code
import numpy as np

def multiply_along_axis(A, B, axis):
    A = np.array(A)
    B = np.array(B)
    # shape check
    if axis >= A.ndim:
        raise np.AxisError(axis, A.ndim)
    if A.shape[axis] != B.size:
        raise ValueError("'A' and 'B' must have the same length along the given axis")
    # Expand 'B' according to 'axis':
    # 1. Swap the given axis with axis=0 (just need the swapped 'shape' tuple here)
    swapped_shape = A.swapaxes(0, axis).shape
    # 2. Repeat:
    #    loop through the number of A's dimensions; at each step:
    #    a) repeat 'B':
    #       the number of repetitions = the length of 'A' along the
    #       current looping step;
    #       the axis along which the values are repeated is always axis=0,
    #       because 'B' initially has just 1 dimension
    #    b) reshape 'B':
    #       'B' is then reshaped as the shape of 'A', but this 'shape' only
    #       contains the dimensions that have been counted by the loop
    for dim_step in range(A.ndim - 1):
        B = B.repeat(swapped_shape[dim_step + 1], axis=0) \
             .reshape(swapped_shape[:dim_step + 2])
    # 3. Swap the axis back to ensure the returned 'B' has exactly the
    #    same shape as 'A'
    B = B.swapaxes(0, axis)
    return A * B
And here is an example
In [33]: A = np.random.rand(3,5)*10; A = A.astype(int); A
Out[33]:
array([[7, 1, 4, 3, 1],
[1, 8, 8, 2, 4],
[7, 4, 8, 0, 2]])
In [34]: B = np.linspace(3,7,5); B
Out[34]: array([3., 4., 5., 6., 7.])
In [35]: multiply_along_axis(A, B, axis=1)
Out[35]:
array([[21., 4., 20., 18., 7.],
[ 3., 32., 40., 12., 28.],
[21., 16., 40., 0., 14.]])
Simplifying @Neinstein's solution, I arrived at
def multiply_along_axis(A, B, axis):
    return np.swapaxes(np.swapaxes(A, axis, -1) * B, -1, axis)
This example also avoids copying and wasting memory. The explicit broadcasting is avoided by swapping the desired axis of A to the last position, performing the multiplication, and then swapping the axis back to its original position. An additional advantage is that numpy takes care of the error handling and type conversion.
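A hypothetical usage of this one-liner (illustrative values only), checked against the manual broadcasting from the question:
import numpy as np

def multiply_along_axis(A, B, axis):
    return np.swapaxes(np.swapaxes(A, axis, -1) * B, -1, axis)

a = np.arange(24).reshape(2, 3, 4)
b = np.array([1., 2., 3.])
print(np.array_equal(multiply_along_axis(a, b, axis=1),
                     a * b[np.newaxis, :, np.newaxis]))   # True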
You could also use a simple matrix trick:
c = np.matmul(a, np.diag(b))
This is just matrix multiplication between a and a matrix whose diagonal entries are the elements of b. Maybe not as efficient, but it's a nice one-line solution.
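A quick check of the diag trick for a 2-D case (a sketch; for large b this builds an explicit diagonal matrix, which is why it is less efficient than broadcasting):
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.array([1, 10, 100])
print(np.array_equal(np.matmul(a, np.diag(b)), a * b))   # True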

Python numpy: Dimension [0] in vectors (n-dim) vs. arrays (nxn-dim)

I'm currently wondering how the numpy array behaves. I feel like the dimensions are not consistent from vectors (Nx1 dimensional) to 'real arrays' (NxN dimensional).
I don't get why this isn't working:
a = array(([1,2],[3,4],[5,6]))
concatenate((a[:,0],a[:,1:]), axis = 1)
# ValueError: all the input arrays must have same number of dimensions
It seems like the : (in 1:]) makes the difference, but :0 is not working.
Thanks in advance!
Detailed version: I would expect that shape(b)[0] references the vertical direction in (Nx1) arrays, like in a 2D (NxN) array. But it seems like dimension [0] is the horizontal direction in (Nx1) arrays?
from numpy import *
a = array(([1,2],[3,4],[5,6]))
b = a[:,0]
print shape(a) # (3L, 2L), [0] is vertical
print a # [1,2],[3,4],[5,6]
print shape(b) # (3L, ), [0] is horizontal
print b # [1 3 5]
c = b * ones((shape(b)[0],1))
print shape(c) # (3L, 3L), I'd expect (3L, 1L)
print c # [[ 1. 3. 5.], [ 1. 3. 5.], [ 1. 3. 5.]]
What did I get wrong? Is there a nicer way than
d = b * ones((1, shape(b)[0]))
d = transpose(d)
print shape(d) # (3L, 1L)
print d # [[ 1.], [ 3.], [ 5.]]
to get the (Nx1) vector that I expect or want?
There are two overall issues here. First, b is not an (N, 1) shaped array, it is an (N,) shaped array. In numpy, 1D and 2D arrays are different things. 1D arrays simply have no direction. Vertical vs. horizontal, rows vs. columns, these are 2D concepts.
The second has to do with something called "broadcasting". In numpy arrays, you are able to broadcast lower-dimensional arrays to higher-dimensional ones, and the lower-dimensional part is applied elementwise to the higher-dimensional one.
The broadcasting rules are pretty simple:
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when
they are equal, or
one of them is 1
In your case, broadcasting starts with the last dimension of ones((shape(b)[0],1)), which is 1. This meets the second criterion, so that dimension is stretched to match b. The array b is then multiplied elementwise against each row of ones((shape(b)[0],1)), resulting in a (3, 3) array.
So it is roughly equivalent to:
c = np.array([x*b for x in ones(shape(b))])
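A minimal shape check of that broadcasting (just to make the rule concrete):
import numpy as np

b = np.array([1, 3, 5])            # shape (3,)
c = b * np.ones((3, 1))            # trailing dims: 3 vs 1 -> broadcast, result (3, 3)
print(c.shape)                     # (3, 3)
print(c)                           # every row is [1. 3. 5.]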
Edit:
To answer your original question, what you want to do is to keep both the first and second arrays as 2D arrays.
numpy has a very simple rule for this: indexing reduces the number of dimensions, slicing doesn't. So all you need is a length-1 slice. In your example, just change a[:,0] to a[:,:1]. This means 'get every column up to the second one'. Of course that only includes the first column, but it is still considered a slice operation rather than getting an element, so it still preserves the number of dimensions:
>>> print(a[:, 0])
[1 3 5]
>>> print(a[:, 0].shape)
(3,)
>>> print(a[:, :1])
[[1]
[3]
[5]]
>>> print(a[:, :1].shape)
(3, 1)
>>> print(concatenate((a[:,:1],a[:,1:]), axis = 1))
[[1 2]
[3 4]
[5 6]]
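Besides the slicing trick above, a common alternative (not from the original answer, just a standard idiom) is to add the missing dimension explicitly with np.newaxis or reshape:
import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6]])
b = a[:, 0]                        # shape (3,)
print(b[:, np.newaxis].shape)      # (3, 1)
print(b.reshape(-1, 1).shape)      # (3, 1)
print(np.concatenate((b[:, np.newaxis], a[:, 1:]), axis=1))
# [[1 2]
#  [3 4]
#  [5 6]]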

Using .mean() in numpy

I was given code and I'm familiar with numpy, but this one line really has me stuck looking for an answer.
plt.contourf(lat,lev,T.mean(0).mean(-1),extend='both')
T is a 4 dimensional variable dependent on time, lat, lon, lev.
My question is, what does the T.mean(0).mean(-1) do?
Thanks!
The value passed to mean specifies the axis along which to take the mean. Therefore, T.mean(0) takes the mean along the 0th axis and returns a 3D array. The .mean(-1) then performs the mean along the last axis of the newly created 3D array, returning a 2D array.
Which is conveniently ideal for contourf.
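A minimal shape sketch of the chained means (the axis order of T here is an assumption, just to show how the dimensions drop out):
import numpy as np

T = np.random.rand(12, 17, 20, 30)   # e.g. (time, lev, lat, lon) - assumed order
print(T.mean(0).shape)               # (17, 20, 30) - time axis averaged away
print(T.mean(0).mean(-1).shape)      # (17, 20)     - then the last axis averaged away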
It's the axis along which to take the mean.
>>> import numpy
>>> arr = numpy.array([[1,2,3,4],[5,6,7,8]])
>>> arr.mean(0) == [(1+5)/2, (2+6)/2, (3+7)/2, (4+8)/2]
array([ True, True, True, True], dtype=bool)
>>> arr.mean(1) == [(1+2+3+4)/4, (5+6+7+8)/4]
array([ True, True], dtype=bool)
Here are some examples, which I hope will explain what is going on:
In [191]:
#the data
a=np.random.random((3,3,3))
print a
[[[ 0.21715561 0.23384392 0.21248607]
[ 0.10788638 0.61387072 0.56579586]
[ 0.6027137 0.77929822 0.80993495]]
[[ 0.36330373 0.26790271 0.79011397]
[ 0.01571846 0.99187387 0.1301911 ]
[ 0.18856381 0.09577381 0.03728304]]
[[ 0.18849473 0.16550599 0.41999887]
[ 0.65009076 0.39260551 0.92284577]
[ 0.92642505 0.46513472 0.77273484]]]
In [192]:
#mean() returns the grand mean
a.mean()
Out[192]:
0.44176096869094533
In [193]:
#mean(0) returns the mean along the 1st axis
a.mean(0)
Out[193]:
array([[ 0.25631803, 0.22241754, 0.47419964],
[ 0.25789853, 0.6661167 , 0.53961091],
[ 0.57256752, 0.44673558, 0.53998427]])
In [195]:
#what is this doing?
a.mean(-1)
Out[195]:
array([[ 0.22116187, 0.42918432, 0.73064896],
[ 0.47377347, 0.37926114, 0.10720688],
[ 0.25799986, 0.65518068, 0.72143154]])
In [196]:
#it is returning the mean along the last axis, in this case, the 3rd axis
a.mean(2)
Out[196]:
array([[ 0.22116187, 0.42918432, 0.73064896],
[ 0.47377347, 0.37926114, 0.10720688],
[ 0.25799986, 0.65518068, 0.72143154]])
In [197]:
#Ok, this is now clear: calculate the mean along the 1st axis first, then calculate the mean along the last axis of the resultant.
a.mean(0).mean(-1)
Out[197]:
array([ 0.31764507, 0.48787538, 0.51976246])
IMO, using T as a variable name is probably not a good idea. .T means transpose in numpy (it's an attribute, not a method).
