I am so confused about Numpy arrays. Let's say I have two Numpy arrays.
a = np.array([[1,2], [3,4], [5,6]])
b = np.array([[1,10], [1, 10]])
My interpretation is that a and b are 3x2 and 2x2 matrices, i.e.,
a = 1 2    b = 1 10
    3 4        1 10
    5 6
Then, I thought it should be fine to do a * b since it is a multiplication of 3x2 and 2x2 matrices. However, it was not possible and I had to use a.dot(b).
Given this, I think my interpretation of Numpy arrays is not right. Can anyone let me know how I should think of Numpy arrays? I know that I can do a*b if I convert a and b into np.matrix. However, looking at others' code, it seems that people are fine using Numpy arrays as matrices, so I wonder how I should understand Numpy arrays in terms of matrices.
For numpy arrays, the * operator performs element-by-element multiplication. This is only well defined if the arrays have the same shape (or shapes that can be broadcast to each other). To see that * is not matrix multiplication, note that element-by-element multiplication with the identity matrix does not return the same matrix:
>>> I = np.array([[1,0],[0,1]])
>>> B = np.array([[1,2],[3,4]])
>>> I*B
array([[ 1, 0],
       [ 0, 4]])
Using the numpy function np.dot(a, b) produces the usual matrix multiplication:
>>> np.dot(I, B)
array([[ 1, 2],
       [ 3, 4]])
np.dot is probably what you're looking for?
a = np.array([[1,2], [3,4], [5,6]])
b = np.array([[1,10], [1, 10]])
np.dot(a,b)
Out[6]:
array([[  3,  30],
       [  7,  70],
       [ 11, 110]])
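For what it's worth, the @ operator (Python 3.5+) gives the same result as np.dot for 2-D arrays:
>>> a @ b
array([[  3,  30],
       [  7,  70],
       [ 11, 110]])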
I'm learning numpy from a YouTube tutorial. In a video he demonstrated that
wine_data_arr[:, 0].shape
where wine_data_arr is a two-dimensional numpy array imported from sklearn. The result is (178,), and he said "it is a one-dimensional array". But in math, for example, this
[1,2,3]
can represent a 1-by-3 matrix, which has dimension 2. So my question is: why is wine_data_arr[:, 0] a one-dimensional array? I guess this definition must be useful in some situation, so what is that situation?
To be more specific: when writing wine_data_arr[:, 0] I provide two arguments, i.e. : and 0, and the result is one-dimensional. When I write wine_data_arr[:, (0,4)], I still provide two arguments, : and (0,4), a tuple, and the result is two-dimensional. Why don't both produce a two-dimensional matrix?
Even if they "look" the same, a vector is not the same as a matrix. Consider:
>>> np.array([1,2,3,4])
array([1, 2, 3, 4])
>>> np.matrix([1,2,3,4])
matrix([[1, 2, 3, 4]])
>>> np.matrix([[1,2],[3,4]])
matrix([[1, 2],
        [3, 4]])
When slicing a two-dimensional array like
>>> wine_data_arr = np.array([[1,2,3], [4,5,6], [7,8,9]])
>>> wine_data_arr
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
you can request a lower-dimensional component (a single row or column) using an integer index
>>> wine_data_arr[:,0]
array([1, 4, 7])
>>> wine_data_arr[0,:]
array([1, 2, 3])
or a same-dimensional "piece" using a slice index:
>>> wine_data_arr[:, 0:1]
array([[1],
       [4],
       [7]])
If you use two integer indices, you get a single zero-dimensional element of the array:
>>> wine_data_arr[0,0]
1
In numpy, arrays can have 0, 1, 2 or more dimensions. In contrast to MATLAB, there is no 2d lower bound. Numpy is also generally consistent with Python lists, in display and indexing. MATLAB generally follows linear algebra conventions, but I'm sure there are other math definitions for arrays and vectors. In physics, a vector represents a point in space, or a direction, not a 2d 'column vector' or 'row vector'.
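For instance, the 0d end of that range looks like this:
>>> np.array(3).ndim     # a 0d array has no axes at all
0
>>> np.array(3).shape
()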
A list:
In [159]: alist = [1, 2, 3]
In [160]: len(alist)
Out[160]: 3
An array made from this list:
In [161]: arr = np.array(alist)
In [162]: arr.shape
Out[162]: (3,)
Indexing a list removes a level of nesting. Scalar indexing an array removes a dimension. See
https://numpy.org/doc/stable/user/basics.indexing.html
In [163]: alist[0]
Out[163]: 1
In [164]: arr[0]
Out[164]: 1
A 2d array:
In [166]: marr = np.arange(4).reshape(2, 2)
In [167]: marr
Out[167]:
array([[0, 1],
       [2, 3]])
Again, scalar indexing removes a dimension:
In [169]: marr[0,:]
Out[169]: array([0, 1])
In [170]: marr[:, 0]
Out[170]: array([0, 2])
In [172]: marr[1, 1]
Out[172]: 3
Indexing with a list or slice preserves the dimension:
In [173]: marr[0, [1]]
Out[173]: array([1])
In [174]: marr[0, 0:1]
Out[174]: array([0])
Count the [] to determine the dimensionality.
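Equivalently, .ndim reports that count for the arrays above:
>>> arr.ndim, marr.ndim
(1, 2)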
The short answer: this is a convention.
Before I go into further details, let me clarify that "dimensionality" in NumPy, for example, is not the same as in math. In math, [1, 2, 3] is a three-dimensional vector, or a one-by-three matrix if you want. Here, however, dimensionality really means the "physical" dimension of the array, i.e., how many axes are present in your array (or matrix, or tensor, etc.).
Now let me get back to your question of why this particular definition of array dimension is helpful. What I'm going to say next is somewhat philosophical and is my own take on it. Essentially, it all boils down to communication between programmers. For example, when you are reading the documentation of some Python code and wondering about the dimensionality of the output array, sure, the documentation can write "N x M x ..." and then carefully define what N, M, etc. are. But in many cases, just the number of axes (the "dimensionality" referred to in NumPy) is enough to inform you. In that case, the documentation becomes much cleaner and easier to read while still providing enough information about the expected outcome.
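For instance, with a made-up (178, 13) array standing in for the wine data:
>>> import numpy as np
>>> arr = np.zeros((178, 13))
>>> arr.ndim              # NumPy's "dimensionality": the number of axes
2
>>> arr.shape             # the size along each axis
(178, 13)
>>> arr[:, 0].shape       # an integer index drops an axis
(178,)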
Given 2 lists of arrays (or 2 3D arrays), is there a smarter way in numpy, besides a loop, to get the product of the first array of the first list with the first array of the second list, and so on? I have a feeling I am overlooking the obvious. This is my current implementation:
import numpy as np

r = []
for i in range(np.shape(rz)[2]):
    r.append(ry[..., i] @ rz[..., i])
r = np.array(r)
Assuming that the last dimension is the same, numpy.einsum should do the trick:
import numpy as np
np.einsum('ijk,jmk->imk', ry, rz)
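As a sanity check with made-up shapes (assuming ry is (I, J, K) and rz is (J, M, K)): the loop stacks the K matrix products along a new first axis, so the einsum result matches it after moving the last axis to the front:
import numpy as np

rng = np.random.default_rng(0)
ry = rng.random((2, 3, 5))     # (I, J, K)
rz = rng.random((3, 4, 5))     # (J, M, K)

# loop version: K matrix products stacked along a new first axis -> (K, I, M)
r_loop = np.array([ry[..., i] @ rz[..., i] for i in range(rz.shape[2])])

# einsum version keeps K as the last axis -> (I, M, K)
r_ein = np.einsum('ijk,jmk->imk', ry, rz)

print(np.allclose(r_loop, np.moveaxis(r_ein, -1, 0)))   # True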
import numpy as np
A = np.array([[3, 6, 7], [5, -3, 0]])
B = np.array([[1, 1], [2, 1], [3, -3]])
C = A.dot(B)
print(C)
Output:
[[ 36 -12]
 [ -1   2]]
I have two numpy arrays:
a = np.array([1, 2, 3]).reshape(3, 1)
b = np.array([4, 5]).reshape(2,1)
When I use a*b.T, I expected it to fail (or give a wrong result), because their shapes are different (using * performs element-wise multiplication for arrays).
But the result looks like a matrix multiplication:
[[ 4, 5],
 [ 8, 10],
 [12, 15]]
# this shape is (3, 2)
Why does it work like this?
Your a * b.T is element-wise multiplication, and it works because of broadcasting. Addition and many other binary operations work with this pair of shapes.
a is (3,1). b.T is (1,2). Broadcasting combines (3,1) with (1,2) to produce (3,2). Each size-1 dimension is stretched to match the corresponding dimension of the other array.
Unless you make arrays with np.matrix, * does not perform mathematical matrix multiplication. np.dot is used to perform that (@ and np.einsum also do this).
With this particular combination of shapes, the dot product gives the same result. np.outer(a, b) also produces it, the mathematical outer product. np.dot matches the last dimension of a with the second-to-last dimension of b.T. In this case they are both 1. dot is more interesting when the shared dimension has multiple items, producing the familiar sum of products.
In [5]: np.dot(a, b.T)
Out[5]:
array([[ 4, 5],
       [ 8, 10],
       [12, 15]])
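np.outer, mentioned above, gives the same values (a quick check):
>>> np.outer(a, b)
array([[ 4,  5],
       [ 8, 10],
       [12, 15]])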
'outer' addition:
In [3]: a + b.T
Out[3]:
array([[5, 6],
       [6, 7],
       [7, 8]])
It may help to look at a and b like this:
In [7]: a
Out[7]:
array([[1],
       [2],
       [3]])
In [8]: b
Out[8]:
array([[4],
       [5]])
In [9]: b.T
Out[9]: array([[4, 5]])
I generally don't talk about numpy arrays as matrices unless they are created with np.matrix or, more often, scipy.sparse. numpy arrays can be 0d, 1d, 2d and higher. I pay more attention to the shape than to the names.
Suppose I have a 2d and a 1d numpy array. I want to add the second array to each subarray of the first one and get a new 2d array as the result.
>>> import numpy as np
>>> a = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> b = np.array([2, 3])
>>> c = ... # <-- What should be here?
>>> c
array([[ 3,  5],
       [ 5,  7],
       [ 7,  9],
       [ 9, 11]])
I could use a loop, but I think there are standard ways to do it within numpy.
What is the best and quickest way to do it? Performance matters.
Thanks.
I think the comments are missing the explanation of why a + b works. It's called broadcasting.
Basically, if you have an NxM matrix and a length-M (or 1xM) vector, you can directly use the + operator to add the vector to each row of the matrix.
This also works if you have an Nx1 vector, which gets added to each column.
Broadcasting also works with other operators and other array dimensions.
Take a look at the documentation to fully understand broadcasting.
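So for the arrays in the question, the answer is simply c = a + b:
>>> import numpy as np
>>> a = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> b = np.array([2, 3])
>>> c = a + b          # b, shape (2,), is broadcast against each row of a, shape (4, 2)
>>> c
array([[ 3,  5],
       [ 5,  7],
       [ 7,  9],
       [ 9, 11]])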
How to convert (5,) numpy array to (5,1)?
And how to convert backwards from (5,1) to (5,)?
What is the purpose of a (5,) array, and why is one dimension omitted? I mean, why don't we always use the (5,1) form?
Does this happen only with 1D and 2D arrays, or does it happen with 3D arrays as well, i.e., can a (2,3,) array exist?
UPDATE:
I managed to convert from (5,) to (5,1) with
a = np.reshape(a, (a.shape[0], 1))
but the suggested variant looks simpler:
a = a[:, None] or a = a[:, np.newaxis]
To convert back from (5,1) to (5,), np.ravel can be used:
a = np.ravel(a)
A numpy array with shape (5,) is a 1-dimensional array, while one with shape (5,1) is a 2-dimensional array. The difference is subtle, but it can alter some computations in a major way. One has to be especially careful, since these changes can be bulldozed over by operations which flatten all dimensions, like np.mean or np.sum.
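A small sketch of both points (the shape difference changing a result, and a reduction hiding it):
>>> import numpy as np
>>> a = np.arange(5)            # shape (5,)
>>> b = a.reshape(5, 1)         # shape (5, 1)
>>> (a + a).shape               # stays 1-dimensional
(5,)
>>> (a + b).shape               # (5,) broadcast against (5,1) gives a 5x5 array
(5, 5)
>>> a.sum() == b.sum()          # flattening reductions hide the difference
True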
In addition to @m-massias's answer, consider the following as an example:
17:00:25 [2]: import numpy as np
17:00:31 [3]: a = np.array([1,2])
17:00:34 [4]: b = np.array([[1,2], [3,4]])
17:00:45 [6]: b * a
Out[6]:
array([[1, 4],
       [3, 8]])
17:00:50 [7]: b * a[:,None] # Different result!
Out[7]:
array([[1, 2],
       [6, 8]])
a has shape (2,) and it is broadcast over the second dimension. So the result you get is that each row (the first dimension) is multiplied by the vector:
17:02:44 [10]: b * np.array([[1, 2], [1, 2]])
Out[10]:
array([[1, 4],
       [3, 8]])
On the other hand, a[:,None] has the shape (2,1) and so the orientation of the vector is known to be a column. Hence, the result you get is from the following operation (where each column is multiplied by a):
17:03:39 [11]: b * np.array([[1, 1], [2, 2]])
Out[11]:
array([[1, 2],
       [6, 8]])
I hope that sheds some light on how the two arrays will behave differently.
You can add a new axis to an array a by doing a = a[:, None] or a = a[:, np.newaxis]
As far as "one dimension omitted", I don't really understand your question, because it has no end : the array could be (5, 1, 1), etc.
Use the reshape() function. For example, open a Python terminal and type the following:
>>> import numpy as np
>>> a = np.random.random(5)
>>> a
array([0.85694461, 0.37774476, 0.56348081, 0.02972139, 0.23453958])
>>> a.shape
(5,)
>>> b = a.reshape(5, 1)
>>> b.shape
(5, 1)