I'm trying to understand vectorized indexing in xarray by following this example from the docs:
import xarray as xr
import numpy as np
da = xr.DataArray(np.arange(12).reshape((3, 4)), dims=['x', 'y'],
                  coords={'x': [0, 1, 2], 'y': ['a', 'b', 'c', 'd']})
ind_x = xr.DataArray([0, 1], dims=['x'])
ind_y = xr.DataArray([0, 1], dims=['y'])
The output of the array da is as follows:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
So far so good. The example then shows two ways of indexing: orthogonal (which I'm not interested in here) and vectorized (which is what I want). For vectorized indexing the following is shown:
In [37]: da[ind_x, ind_x] # vectorized indexing
Out[37]:
<xarray.DataArray (x: 2)>
array([0, 5])
Coordinates:
    y        (x) <U1 'a' 'b'
  * x        (x) int64 0 1
The result seems to be what I want, but this feels very strange to me. ind_x (which in theory refers to dims=['x']) is being passed twice, but somehow it is capable of indexing what appear to be both the x and y dims. As far as I understand, the x dim would be the rows and the y dim would be the columns, is that correct? How come the same ind_x is capable of accessing both the rows and the cols?
This seems to be the concept I need for my problem, but I can't understand how it works or how to extend it to more dimensions. I was expecting this result to be given by da[ind_x, ind_y]; however, surprisingly enough, that seems to yield the orthogonal indexing.
Having ind_x used twice in the example is probably a little confusing: the dimension name of the indexer doesn't have to correspond to a dimension of the array at all; what determines the behaviour is whether the indexers share a dimension. Observe:
ind_a = xr.DataArray([0, 1], dims=["a"]
da[ind_a, ind_a]
Gives:
<xarray.DataArray (a: 2)>
array([0, 5])
Coordinates:
    x        (a) int32 0 1
    y        (a) <U1 'a' 'b'
Dimensions without coordinates: a
The same goes for the orthogonal example:
ind_a = xr.DataArray([0, 1], dims=["a"])
ind_b = xr.DataArray([0, 1], dims=["b"])
da[ind_a, ind_b]
Result:
<xarray.DataArray (a: 2, b: 2)>
array([[0, 2],
       [4, 6]])
Coordinates:
    x        (a) int32 0 1
    y        (b) <U1 'a' 'c'
Dimensions without coordinates: a, b
The difference is purely in terms of "labeling", as in this case you end up with dimensions without coordinates.
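If you do want coordinates on that new dimension, you can attach them afterwards with assign_coords. A minimal sketch (the labels [10, 20] are placeholders I made up):
res = da[ind_a, ind_a]
res.assign_coords(a=[10, 20])   # attaches a coordinate to the otherwise label-less dim 'a'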
Fancy indexing
Generally speaking, I personally do not find "fancy indexing" the most intuitive concept, but I did find this example in NEP 21 pretty clarifying: https://numpy.org/neps/nep-0021-advanced-indexing.html
Specifically, this:
Consider indexing a 2D array by two 1D integer arrays, e.g., x[[0, 1], [0, 1]]:
Outer indexing is equivalent to combining multiple integer indices with itertools.product(). The result in this case is another 2D array with all combinations of indexed elements, e.g., np.array([[x[0, 0], x[0, 1]], [x[1, 0], x[1, 1]]]).
Vectorized indexing is equivalent to combining multiple integer indices with zip(). The result in this case is a 1D array containing the diagonal elements, e.g., np.array([x[0, 0], x[1, 1]]).
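To make the product()/zip() analogy concrete, here is a small NumPy sketch of my own, using the same 3x4 block of values as above:
import itertools
import numpy as np

x = np.arange(12).reshape(3, 4)
rows, cols = [0, 1], [0, 1]

# outer / orthogonal indexing: all combinations of the indices
np.array([x[i, j] for i, j in itertools.product(rows, cols)]).reshape(2, 2)
# array([[0, 1],
#        [4, 5]])

# vectorized indexing: the indices are paired up, as with zip()
np.array([x[i, j] for i, j in zip(rows, cols)])
# array([0, 5])

# NumPy's native fancy indexing x[[0, 1], [0, 1]] is the zip() variant
x[rows, cols]
# array([0, 5])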
Back to xarray
da[ind_x, ind_y]
Can also be written as:
da.isel(x=ind_x, y=ind_y)
The dimensions are implicit in the order. However, xarray still aligns and broadcasts based on dimension labels, so da[ind_y] (an indexer labelled 'y' applied along x) mismatches and results in an error, while da[ind_a] and da[ind_b] both work.
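A quick way to check that claim, reusing the da, ind_y and ind_a defined above (a sketch only; the exact exception type may vary between xarray versions):
try:
    da[ind_y]              # indexer labelled 'y' is applied along x -> conflicting sizes for 'y'
except Exception as err:
    print(type(err).__name__, err)

da[ind_a]                  # dim 'a' clashes with nothing -> works, picks rows 0 and 1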
More dimensions
The dims you provide for the indexer are what determines the shape of the output, not the dimensions of the array you're indexing.
If you want to select single values along the dimensions (so we're zip()-ing through the indexes simultaneously), just make sure that your indexers share the dimension, here for a 3D array:
da = xr.DataArray(
    data=np.arange(3 * 4 * 5).reshape(3, 4, 5),
    coords={
        "x": [1, 2, 3],
        "y": ["a", "b", "c", "d"],
        "z": [1.0, 2.0, 3.0, 4.0, 5.0],
    },
    dims=["x", "y", "z"],
)
ind_along_x = xr.DataArray([0, 1], dims=["new_index"])
ind_along_y = xr.DataArray([0, 2], dims=["new_index"])
ind_along_z = xr.DataArray([0, 3], dims=["new_index"])
da[ind_along_x, ind_along_y, ind_along_z]
Note that the values of the indexers do not have to be the same -- that would be a pretty severe limitation, after all.
Result:
<xarray.DataArray (new_index: 2)>
array([ 0, 33])
Coordinates:
    x        (new_index) int32 1 2
    y        (new_index) <U1 'a' 'c'
    z        (new_index) float64 1.0 4.0
Dimensions without coordinates: new_index
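For readability, the same selection can also be written with isel keywords, per the equivalence mentioned above:
da.isel(x=ind_along_x, y=ind_along_y, z=ind_along_z)   # same result as da[ind_along_x, ind_along_y, ind_along_z]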
Related
I have a multidimensional xarray like:
testarray = xr.DataArray([[1,-2,3],[4,5,-6]])
and I want to get the indices for a specific condition, e.g. where testarray is smaller than 0.
So the expected result should be an array like:
result = [[1,2],[0,1]]
Or any other format that lets me get these indices for further calculations. I can't imagine that there is no option within xarray for such an elementary problem, but I can't find it. Things like
testarray.where(testarray<0)
do some very suspicious stuff. What's the use of an array that's the same but with NaNs where the condition is not met?
Thanks a lot for your help :)
To get the indices, you could use np.argwhere:
In [3]: da= xr.DataArray([[1,-2,3],[4,5,-6]])
In [4]: da
Out[4]:
<xarray.DataArray (dim_0: 2, dim_1: 3)>
array([[ 1, -2, 3],
[ 4, 5, -6]])
Dimensions without coordinates: dim_0, dim_1
In [14]: da.where(da<0,0)
Out[14]:
<xarray.DataArray (dim_0: 2, dim_1: 3)>
array([[ 0, -2, 0],
[ 0, 0, -6]])
Dimensions without coordinates: dim_0, dim_1
# Note you'd need to handle the case of a 0 value here
In [13]: np.argwhere(da.where(da<0,0).values)
Out[13]:
array([[0, 1],
[1, 2]])
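As a side note (my own addition, not part of the snippet above), you can also apply np.argwhere directly to the boolean mask, which sidesteps the caveat about genuine 0 values:
np.argwhere((da < 0).values)
# array([[0, 1],
#        [1, 2]])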
I agree this would be a useful function to have natively in xarray; I'm not sure of the best way of doing it at the moment. Open to ideas!
For a given 2D tensor I want to retrieve all indices where the value is 1. I expected to be able to simply use torch.nonzero(a == 1).squeeze(), which would return tensor([1, 3, 2]). However, instead, torch.nonzero(a == 1) returns a 2D tensor (that's okay), with two values per row (that's not what I expected). The returned indices should then be used to index the second dimension (index 1) of a 3D tensor, again returning a 2D tensor.
import torch
a = torch.Tensor([[12, 1, 0, 0],
[4, 9, 21, 1],
[10, 2, 1, 0]])
b = torch.rand(3, 4, 8)
print('a_size', a.size())
# a_size torch.Size([3, 4])
print('b_size', b.size())
# b_size torch.Size([3, 4, 8])
idxs = torch.nonzero(a == 1)
print('idxs_size', idxs.size())
# idxs_size torch.Size([3, 2])
print(b.gather(1, idxs))
Evidently, this does not work, leading to a RuntimeError:
RuntimeError: invalid argument 4: Index tensor must have same
dimensions as input tensor at
C:\w\1\s\windows\pytorch\aten\src\TH/generic/THTensorEvenMoreMath.cpp:453
It seems that idxs is not what I expect it to be, nor can I use it the way I thought. idxs is
tensor([[0, 1],
[1, 3],
[2, 2]])
but reading through the documentation I don't understand why I also get back the row indices in the resulting tensor. Now, I know I can get the correct idxs by slicing idxs[:, 1] but then still, I cannot use those values as indices for the 3D tensor because the same error as before is raised. Is it possible to use the 1D tensor of indices to select items across a given dimension?
You could simply slice idxs and pass its columns as the indices, as in:
In [193]: idxs = torch.nonzero(a == 1)
In [194]: c = b[idxs[:, 0], idxs[:, 1]]
In [195]: c
Out[195]:
tensor([[0.3411, 0.3944, 0.8108, 0.3986, 0.3917, 0.1176, 0.6252, 0.4885],
[0.5698, 0.3140, 0.6525, 0.7724, 0.3751, 0.3376, 0.5425, 0.1062],
[0.7780, 0.4572, 0.5645, 0.5759, 0.5957, 0.2750, 0.6429, 0.1029]])
Alternatively, an even simpler (and my preferred) approach is to just use torch.where() and then directly index into the tensor b, as in:
In [196]: b[torch.where(a == 1)]
Out[196]:
tensor([[0.3411, 0.3944, 0.8108, 0.3986, 0.3917, 0.1176, 0.6252, 0.4885],
[0.5698, 0.3140, 0.6525, 0.7724, 0.3751, 0.3376, 0.5425, 0.1062],
[0.7780, 0.4572, 0.5645, 0.5759, 0.5957, 0.2750, 0.6429, 0.1029]])
A bit more explanation about the above approach using torch.where(): it works based on the concept of advanced indexing, which kicks in when we index into a tensor with a tuple of sequence objects such as a tuple of tensors, a tuple of lists, a tuple of tuples, etc.
# some input tensor
In [207]: a
Out[207]:
tensor([[12., 1., 0., 0.],
[ 4., 9., 21., 1.],
[10., 2., 1., 0.]])
For basic indexing, we would need a tuple of integer indices:
In [212]: a[(1, 2)]
Out[212]: tensor(21.)
To achieve the same using advanced indexing, we would need a tuple of sequence objects:
# adv. indexing using a tuple of lists
In [213]: a[([1,], [2,])]
Out[213]: tensor([21.])
# adv. indexing using a tuple of tuples
In [215]: a[((1,), (2,))]
Out[215]: tensor([21.])
# adv. indexing using a tuple of tensors
In [214]: a[(torch.tensor([1,]), torch.tensor([2,]))]
Out[214]: tensor([21.])
And when every dimension is indexed with a 1-D sequence like this, the returned tensor takes the shape of those index sequences; here it is one dimension less than the 2-D input tensor.
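To tie this back to the tensors above (a small check using the same a): indexing the 2-D a with a tuple of two length-3 index tensors returns a 1-D tensor of 3 elements, and that tuple is exactly what torch.where(a == 1) produces:
a[(torch.tensor([0, 1, 2]), torch.tensor([1, 3, 2]))]
# tensor([1., 1., 1.])   -- the three entries of a that equal 1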
Assuming that b's three dimensions are batch_size x sequence_length x features (b x s x feats), the expected results can be achieved as follows.
import torch
a = torch.Tensor([[12, 1, 0, 0],
[4, 9, 21, 1],
[10, 2, 1, 0]])
b = torch.rand(3, 4, 8)
print(b.size())
# b x s x feats
idxs = torch.nonzero(a == 1)[:, 1]
print(idxs.size())
# b
c = b[torch.arange(b.size(0)), idxs]
print(c.size())
# b x feats
import torch
a = torch.Tensor([[12, 1, 0, 0],
[4, 9, 21, 1],
[10, 2, 1, 0]])
b = torch.rand(3, 4, 8)
print('a_size', a.size())
# a_size torch.Size([3, 4])
print('b_size', b.size())
# b_size torch.Size([3, 4, 8])
#idxs = torch.nonzero(a == 1, as_tuple=True)
idxs = torch.nonzero(a == 1)
#print('idxs_size', idxs.size())
print(torch.index_select(b,1,idxs[:,1]))
As a supplement to kmario23's solution, you can achieve the same result with
b[torch.nonzero(a==1,as_tuple=True)]
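A small sketch of what that expression does under the hood (assuming the a and b defined earlier): with as_tuple=True, torch.nonzero returns a tuple of 1-D index tensors, one per dimension -- the same thing torch.where(a == 1) returns -- so indexing b with it triggers the advanced indexing described above:
idx_rows, idx_cols = torch.nonzero(a == 1, as_tuple=True)
# idx_rows: tensor([0, 1, 2]), idx_cols: tensor([1, 3, 2])
b[idx_rows, idx_cols].shape
# torch.Size([3, 8])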
I am trying to create .mat data files using python. The matlab code expects the data to have a certain format, where two-dimensional ndarrays of non-uniform sizes are stored as objects in a column vector. So, in my case, there would be k numpy arrays of shape (m_i, n) - with different m_i for each array - stored in a numpy array with dtype=object of shape (k, 1). I then add this object array to a dictionary and pass it to scipy.io.savemat().
This works fine so long as the m_i are indeed different. If all k arrays happen to have the same number of rows m_i, the behaviour becomes strange. First of all, it requires very explicit assignment to a numpy array of dtype=object that has been initialised to the final size k, otherwise numpy simply creates a three-dimensional array. But even when I have the correct format in python and store it to a .mat file using savemat, there is some kind of problem in the translation to the matlab format.
When I reload the data from the .mat file using scipy.io.loadmat, I find that I still have an object array of shape (k, 1), which still has elements of shape (m, n). However, each element is no longer an int or a float but is instead a numpy array of shape (1, 1) that has to be further indexed to access the contained int or float. So an individual element of an object vector that was supposed to be a numpy array of shape (2, 4) would look something like this:
[array([[array([[0.82374894]]), array([[0.50730055]]),
         array([[0.36721625]]), array([[0.45036349]])],
        [array([[0.26119276]]), array([[0.16843872]]),
         array([[0.28649524]]), array([[0.64239569]])]], dtype=object)]
This also poses a problem for the matlab code that I am trying to build my data files for. It runs fine for the arrays of objects that have different shapes but will break when there are arrays containing arrays of the same shape.
I know this is a rather obscure and possibly unavoidable issue but I figured I would see if anyone else has encountered it and found a fix. Thanks.
I'm not quite clear about the problem. Let me try to recreate your case:
In [58]: from scipy.io import loadmat, savemat
In [59]: A = np.empty((2,1), object)
In [61]: A[0,0]=np.arange(4).reshape(2,2)
In [62]: A[1,0]=np.arange(6).reshape(3,2)
In [63]: A
Out[63]:
array([[array([[0, 1],
[2, 3]])],
[array([[0, 1],
[2, 3],
[4, 5]])]], dtype=object)
In [64]: B=A[[0,0],:]
In [65]: B
Out[65]:
array([[array([[0, 1],
[2, 3]])],
[array([[0, 1],
[2, 3]])]], dtype=object)
As I explained earlier today, creating an object dtype array from arrays of matching size requires special handling. np.array(...) tries to create a higher dimensional array. https://stackoverflow.com/a/56243305/901925
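A small sketch of that special handling (variable names are mine, separate from the session above):
same_shape = [np.zeros((2, 2)), np.ones((2, 2))]
np.array(same_shape).shape
# (2, 2, 2)   -- stacked into a 3-d numeric array, not an object array

cell = np.empty((len(same_shape), 1), dtype=object)   # pre-allocate the object 'cell' array
for i, arr in enumerate(same_shape):
    cell[i, 0] = arr                                  # explicit element-wise assignment
cell.shape, cell[0, 0].shape
# ((2, 1), (2, 2))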
Saving:
In [66]: savemat('foo.mat', {'A':A, 'B':B})
Loading:
In [74]: loadmat('foo.mat')
Out[74]:
{'__header__': b'MATLAB 5.0 MAT-file Platform: posix, Created on: Tue May 21 11:20:42 2019',
'__version__': '1.0',
'__globals__': [],
'A': array([[array([[0, 1],
[2, 3]])],
[array([[0, 1],
[2, 3],
[4, 5]])]], dtype=object),
'B': array([[array([[0, 1],
[2, 3]])],
[array([[0, 1],
[2, 3]])]], dtype=object)}
In [75]: _74['A'][1,0]
Out[75]:
array([[0, 1],
[2, 3],
[4, 5]])
Your problem case looks like it's an object dtype array containing numbers:
In [89]: C = np.arange(4).reshape(2,2).astype(object)
In [90]: C
Out[90]:
array([[0, 1],
[2, 3]], dtype=object)
In [91]: savemat('foo1.mat', {'C': C})
In [92]: loadmat('foo1.mat')
Out[92]:
{'__header__': b'MATLAB 5.0 MAT-file Platform: posix, Created on: Tue May 21 11:39:31 2019',
'__version__': '1.0',
'__globals__': [],
'C': array([[array([[0]]), array([[1]])],
[array([[2]]), array([[3]])]], dtype=object)}
Evidently savemat has converted the integer objects into 2d MATLAB compatible arrays. In MATLAB everything, even scalars, is at least 2d.
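If the goal is a plain numeric matrix on the MATLAB side, one workaround (my suggestion, not part of the original answer) is to cast such an array back to a numeric dtype before saving, so savemat stores it as an ordinary matrix rather than a cell array of 1x1 arrays:
savemat('foo2.mat', {'C': C.astype(float)})
loadmat('foo2.mat')['C']
# array([[0., 1.],
#        [2., 3.]])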
===
And in Octave, the object dtype arrays all produce cells, and the 2d numeric arrays produce matrices:
>> load foo.mat
>> A
A =
{
[1,1] =
0 1
2 3
[2,1] =
0 1
2 3
4 5
}
>> B
B =
{
[1,1] =
0 1
2 3
[2,1] =
0 1
2 3
}
>> load foo1.mat
>> C
C =
{
[1,1] = 0
[2,1] = 2
[1,2] = 1
[2,2] = 3
}
Python: Issue reading in str from MATLAB .mat file using h5py and NumPy is a relatively recent SO question that showed there's a difference between Octave's HDF5 output and MATLAB's.
I have a Pandas series; here are the first two rows:
X.head(2)
Each row holds a 1D array; the column header is mels_flatten:
mels_flatten
0 [0.0171469795289, 0.0173154008662, 0.395695541...
1 [0.0471267533454, 0.0061760868171, 0.005647608...
I want to store the values in a single array to feed to a classifier model.
np.vstack(X.values)
or
np.array(X.values)
both return the following:
array([[ array([ 1.71469795e-02, 1.73154009e-02, 3.95695542e-01, ...,
2.35955651e-04, 8.64118460e-04, 7.74663408e-04])],
[ array([ 0.04712675, 0.00617609, 0.00564761, ..., 0.00277199,
0.00205229, 0.00043118])],
I am not sure how to process an array of array objects.
My expected result is:
array([[ 1.71469795e-02,  1.73154009e-02,  3.95695542e-01, ...,
         2.35955651e-04,  8.64118460e-04,  7.74663408e-04],
       [ 0.04712675,  0.00617609,  0.00564761, ...,  0.00277199,
         0.00205229,  0.00043118]])
I have tried np.concatenate and np.resize, as some other posts suggested, with no luck.
I find it likely that not all of your 1d arrays are the same length, i.e. your series is not compatible with a rectangular 2d array.
Consider the following dummy example:
import pandas as pd
import numpy as np
X = pd.Series([np.array([1,2,3]),np.array([4,5,6])])
# 0 [1, 2, 3]
# 1 [4, 5, 6]
# dtype: object
np.vstack(X.values)
# array([[1, 2, 3],
# [4, 5, 6]])
As the above demonstrates, a collection of 1d arrays (or lists) of the same size will be nicely stacked into a 2d array. Check the size of your arrays, and you'll probably find that there are some discrepancies:
>>> X.apply(len)
0 3
1 3
dtype: int64
If X.apply(len).unique() returns an array with more than one element, you'll have found the source of the problem. In the above rectangular case:
>>> X.apply(len).unique()
array([3])
In a non-conforming example:
>>> Y = pd.Series([np.array([1,2,3]),np.array([4,5])])
>>> np.array(Y.values)
array([array([1, 2, 3]), array([4, 5])], dtype=object)
>>> Y.apply(len).unique()
array([3, 2])
As you can see, the nested array result is coupled to the non-unique length of items inside the original array.
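If the lengths really do differ and you still need a rectangular array for the classifier, one option is to pad each 1d array to a common length before stacking. A sketch, assuming zero-padding to the longest row is acceptable for your model:
max_len = Y.apply(len).max()
np.vstack([np.pad(arr, (0, max_len - len(arr))) for arr in Y.values])
# array([[1, 2, 3],
#        [4, 5, 0]])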
I'm new to Python and NumPy in general. I have read several tutorials and am still confused about the differences between dim, rank, shape, axes and dimensions. My mind seems to be stuck at the matrix representation. So if you say that A is a matrix that looks like this:
A =
1 2 3
4 5 6
then all I can think of is a 2x3 matrix (two rows and three columns). Here I understand that the shape is 2x3, but I am really unable to think outside of 2D matrices. I don't understand, for example, the dot() documentation when it says "For N dimensions it is a sum product over the last axis of a and the second-to-last of b". I'm confused and unable to understand this. I don't understand, if V is an N:1 vector and M is an N:N matrix, how dot(V,M) or dot(M,V) work, or the difference between them.
Can anyone please explain what an N-dimensional array is, what shape and axis mean, and how they relate to the documentation of the dot() function? It would be great if the explanation visualized the ideas.
Dimensionality of NumPy arrays must be understood in the data structures sense, not the mathematical sense, i.e. it's the number of scalar indices you need to obtain a scalar value.(*)
E.g., this is a 3-d array:
>>> X = np.arange(24).reshape(2, 3, 4)
>>> X
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
Indexing once gives a 2-d array (matrix):
>>> X[0]
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
Indexing twice gives a 1-d array (vector), and indexing three times gives a scalar.
The rank of X is its number of dimensions:
>>> X.ndim
3
>>> np.rank(X)
3
Axis is roughly synonymous with dimension; it's used in broadcasting operations:
>>> X.sum(axis=0)
array([[12, 14, 16, 18],
[20, 22, 24, 26],
[28, 30, 32, 34]])
>>> X.sum(axis=1)
array([[12, 15, 18, 21],
[48, 51, 54, 57]])
>>> X.sum(axis=2)
array([[ 6, 22, 38],
[54, 70, 86]])
To be honest, I find this definition of "rank" confusing since it matches neither the name of the attribute ndim nor the linear algebra definition of rank.
Now regarding np.dot, what you have to understand is that there are three ways to represent a vector in NumPy: 1-d array, a column vector of shape (n, 1) or a row vector of shape (1, n). (Actually, there are more ways, e.g. as a (1, n, 1)-shaped array, but these are quite rare.) np.dot performs vector multiplication when both arguments are 1-d, matrix-vector multiplication when one argument is 1-d and the other is 2-d, and otherwise it performs a (generalized) matrix multiplication:
>>> A = np.random.randn(2, 3)
>>> v1d = np.random.randn(2)
>>> np.dot(v1d, A)
array([-0.29269547, -0.52215117, 0.478753 ])
>>> vrow = np.atleast_2d(v1d)
>>> np.dot(vrow, A)
array([[-0.29269547, -0.52215117, 0.478753 ]])
>>> vcol = vrow.T
>>> np.dot(vcol, A)
Traceback (most recent call last):
File "<ipython-input-36-98949c6de990>", line 1, in <module>
np.dot(vcol, A)
ValueError: matrices are not aligned
The rule "sum product over the last axis of a and the second-to-last of b" matches and generalizes the common definition of matrix multiplication.
(*) Arrays of dtype=object are a bit of an exception, since they treat any Python object as a scalar.
np.dot is a generalization of matrix multiplication.
In regular matrix multiplication, an (N,M)-shape matrix multiplied with a (M,P)-shaped matrix results in a (N,P)-shaped matrix. The resultant shape can be thought of as being formed by squashing the two shapes together ((N,M,M,P)) and then removing the middle numbers, M (to produce (N,P)). This is the property that np.dot preserves while generalizing to arrays of higher dimension.
When the docs say,
"For N dimensions it is a sum product over the last axis of a and the
second-to-last of b".
it is speaking to this point. An array of shape (u,v,M) dotted with an array of shape (w,x,y,M,z) would result in an array of shape (u,v,w,x,y,z).
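A quick shape check of that statement (my own sketch with arbitrary sizes):
a = np.ones((2, 3, 4))            # shape (u, v, M) with M = 4
b = np.ones((5, 6, 7, 4, 8))      # shape (w, x, y, M, z)
np.dot(a, b).shape
# (2, 3, 5, 6, 7, 8)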
Let's see how this rule looks when applied to
In [25]: V = np.arange(2); V
Out[25]: array([0, 1])
In [26]: M = np.arange(4).reshape(2,2); M
Out[26]:
array([[0, 1],
[2, 3]])
First, the easy part:
In [27]: np.dot(M, V)
Out[27]: array([1, 3])
There is no surprise here; this is just matrix-vector multiplication.
Now consider
In [28]: np.dot(V, M)
Out[28]: array([2, 3])
Look at the shape of V and M:
In [29]: V.shape
Out[29]: (2,)
In [30]: M.shape
Out[30]: (2, 2)
So np.dot(V,M) is like matrix multiplication of a (2,)-shaped matrix with a (2,2)-shaped matrix, which should result in a (2,)-shaped matrix.
The last (and only) axis of V and the second-to-last axis of M (aka the first axis of M) are multiplied and summed over, leaving only the last axis of M.
If you want to visualize this: np.dot(V, M) looks as though V has 1 row and 2 columns:
[[0, 1]] * [[0, 1],
            [2, 3]]
and so, when V is multiplied by M, np.dot(V, M) equals
[[0*0 + 1*2, 0*1 + 1*3]] = [[2, 3]]
However, I don't really recommend trying to visualize NumPy arrays this way -- at least I never do. I focus almost exclusively on the shape.
(2,) * (2, 2)
   \     /
    \   /
    (2,)
You just think about the "middle" axes being dotted, and disappearing from the resultant shape.
np.sum(arr, axis=0) tells NumPy to sum the elements in arr, eliminating the 0th axis. If arr is 2-dimensional, the 0th axis is the rows. So for example, if arr looks like this:
In [1]: arr = np.arange(6).reshape(2,3); arr
Out[1]:
array([[0, 1, 2],
[3, 4, 5]])
then np.sum(arr, axis=0) will sum along the columns, thus eliminating the 0th axis (i.e. the rows).
In [2]: np.sum(arr, axis=0)
Out[2]: array([3, 5, 7])
The 3 is the result of 0+3, the 5 equals 1+4, the 7 equals 2+5.
Notice arr had shape (2,3), and after summing, the 0th axis is removed so the result is of shape (3,). The 0th axis had length 2, and each sum is composed of adding those 2 elements. The shape (2,3) "becomes" (3,). You can know the resultant shape in advance! This can help guide your thinking.
To test your understanding, consider np.sum(arr, axis=1). Now the 1-axis is removed, so the resultant shape will be (2,), and each element in the result will be the sum of 3 values.
In [3]: np.sum(arr, axis=1)
Out[3]: array([ 3, 12])
The 3 equals 0+1+2, and the 12 equals 3+4+5.
So we see that summing an axis eliminates that axis from the result. This has bearing on np.dot, since the calculation performed by np.dot is a sum of products. Since np.dot performs a summing operation along certain axes, that axis is removed from the result. That is why applying np.dot to arrays of shape (2,) and (2,2) results in an array of shape (2,). The first 2 in both arrays is summed over, eliminating both, leaving only the second 2 in the second array.
In your case,
A is a 2D array, namely a matrix, with shape (2, 3). From the docstring of numpy.matrix:
A matrix is a specialized 2-D array that retains its 2-D nature through operations.
numpy.rank returns the number of dimensions of an array, which is quite different from the concept of rank in linear algebra; e.g. A is an array of dimension/rank 2.
np.dot(V, M), or V.dot(M), multiplies matrix V with M. Note that numpy.dot does the multiplication as far as possible. If V is N:1 and M is N:N, V.dot(M) will raise a ValueError.
e.g.:
In [125]: a
Out[125]:
array([[1],
[2]])
In [126]: b
Out[126]:
array([[2, 3],
[1, 2]])
In [127]: a.dot(b)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-127-9a1f5761fa9d> in <module>()
----> 1 a.dot(b)
ValueError: objects are not aligned
EDIT:
I don't understand the difference between a shape of (N,) and (N, 1) and how it relates to the dot() documentation.
V of shape (N,) is a 1D array of length N, whilst shape (N, 1) is a 2D array with N rows and 1 column:
In [2]: V = np.arange(2)
In [3]: V.shape
Out[3]: (2,)
In [4]: Q = V[:, np.newaxis]
In [5]: Q.shape
Out[5]: (2, 1)
In [6]: Q
Out[6]:
array([[0],
[1]])
As the docstring of np.dot says:
For 2-D arrays it is equivalent to matrix multiplication, and for 1-D
arrays to inner product of vectors (without complex conjugation).
It also performs vector-matrix multiplication if one of the parameters is a vector. Say V.shape==(2,); M.shape==(2,2):
In [17]: V
Out[17]: array([0, 1])
In [18]: M
Out[18]:
array([[2, 3],
[4, 5]])
In [19]: np.dot(V, M) #treats V as a 1*N 2D array
Out[19]: array([4, 5]) #note the result is a 1D array of shape (2,), not (1, 2)
In [20]: np.dot(M, V) #treats V as a N*1 2D array
Out[20]: array([3, 5]) #result is still a 1D array of shape (2,), not (2, 1)
In [21]: Q #a 2D array of shape (2, 1)
Out[21]:
array([[0],
[1]])
In [22]: np.dot(M, Q) #matrix multiplication
Out[22]:
array([[3], #gets a result of shape (2, 1)
[5]])