How can I turn a 1x3 matrix into a one dimensional array - python

I'm writing a function that calculates the dot product of two vectors. The code is later used in a matrix multiplication function. The issue I'm having is that the parameters passed in from the matrix multiplication function are 1x3 matrices, so to multiply them I have to use dot += A[0, i] * B[0, i]. The submission website expects dot += A[i] * B[i], and I'm not sure how to convert my matrices to arrays. Picture of my two functions
I tried recreating the matrices and isolating rows, but I still ended up with [[1, 2, 3]], which messed up my dot product function.

I suggest using the numpy library to perform matrix multiplication.
You can use the numpy.matmul function to perform matrix multiplication. Read more about it here: https://numpy.org/doc/stable/reference/generated/numpy.matmul.html
You can also use the numpy.dot function to perform dot product. Read more about it here: https://numpy.org/doc/stable/reference/generated/numpy.dot.html
Here is an example of how to use the numpy library to perform matrix multiplication:
import numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
c = np.matmul(a, b)
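Since the answer above also mentions numpy.dot, here is a minimal sketch of the 1-D dot product case (same import, illustrative values):
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.dot(a, b))  # 32, the scalar dot product of two 1-D arrays
print(a @ b)         # same result using the @ operator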

Glancing at your image, I noticed the use of np.matrix. Be careful with that. It's an array subclass designed years ago to make numpy more friendly to wayward MATLAB users. Its use is discouraged now.
A key difference is that it must always be 2d:
In [108]: M = np.matrix([1,2,3])
In [109]: M
Out[109]: matrix([[1, 2, 3]]) # note the added dimension
In [110]: M.shape
Out[110]: (1, 3)
And ravel doesn't remove that leading dimension:
In [111]: M.ravel()
Out[111]: matrix([[1, 2, 3]])
np.matrix does have an A1 property, which ravels and converts to ndarray, or rather converts to array and then ravels:
In [112]: M.A1
Out[112]: array([1, 2, 3])
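For completeness, a small sketch of equivalent ways to get a flat 1-D ndarray out of a (1, 3) matrix (assuming the same M as in the session above):
import numpy as np
M = np.matrix([1, 2, 3])
print(M.A1)                   # array([1, 2, 3])
print(np.asarray(M).ravel())  # convert to ndarray, then flatten
print(np.asarray(M)[0])       # row 0 of the underlying 2-D array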
np.matrix does matrix multiplication with *, though @ is now the recommended operator and works with ndarray as well:
In [113]: M*M
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[113], line 1
----> 1 M*M
File ~\miniconda3\lib\site-packages\numpy\matrixlib\defmatrix.py:218, in matrix.__mul__(self, other)
    215 def __mul__(self, other):
    216     if isinstance(other, (N.ndarray, list, tuple)) :
    217         # This promotes 1-D vectors to row vectors
--> 218         return N.dot(self, asmatrix(other))
    219     if isscalar(other) or not hasattr(other, '__rmul__') :
    220         return N.dot(self, other)
File <__array_function__ internals>:180, in dot(*args, **kwargs)
ValueError: shapes (1,3) and (1,3) not aligned: 3 (dim 1) != 1 (dim 0)
With a (1,3) shape, we have to use a transpose to do dot, producing (3,3) one way:
In [114]: M.T*M # (3,1) with (1,3) => (3,3)
Out[114]:
matrix([[1, 2, 3],
        [2, 4, 6],
        [3, 6, 9]])
or (1,1) the other:
In [115]: M*M.T # (1,3) with (3,1) => (1,1)
Out[115]: matrix([[14]])
whereas using the 1d A1 gives a scalar dot product:
In [116]: M.A1 @ M.A1
Out[116]: 14
While you are welcome to implement your own dot (for learning purposes, I assume), be aware that numpy has good code to do this for you.
np.dot
np.matmul
np.einsum
Note that dot and matmul extend the usual 2d matrix multiplication, with special rules for 1d arrays and for 3+d arrays. But the key pairing is "the last dim of A pairs with the 2nd-to-last of B", the usual "across the row of A, down the column of B" rule we learned as kids.
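If the grading site really does expect a plain loop over 1-D arrays, here is a minimal sketch of that idea (my_dot is a hypothetical name; the flattening step is what answers the original question):
import numpy as np

def my_dot(A, B):
    # flatten (1, 3) matrices (or any array-like) to plain 1-D arrays first
    A = np.asarray(A).ravel()
    B = np.asarray(B).ravel()
    total = 0
    for i in range(len(A)):
        total += A[i] * B[i]  # the dot += A[i] * B[i] pattern the grader expects
    return total

print(my_dot(np.matrix([1, 2, 3]), np.matrix([4, 5, 6])))  # 32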

Related

How to slice a numpy array using index arrays with different shapes?

Let's say that we have the following 2d numpy array:
arr = np.array([[1,1,0,1,1],
                [0,0,0,1,0],
                [1,0,0,0,0],
                [0,0,1,0,0],
                [0,1,0,0,0]])
and the following indices for rows and columns:
rows = np.array([0,2,4])
cols = np.array([1,2])
The objective is to slice arr using rows and cols to take the following expected result:
arr_sliced = np.array([[1,0],
                       [0,0],
                       [1,0]])
Using directly the arrays as indices like arr[rows, cols] leads to:
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (3,) (2,)
So what is the straightforward way to achieve this kind of slicing?
Update: useful information about the solution
So the solution was simple enough; it just demands a basic understanding of numpy's broadcasting. The broadcasting examples in the numpy docs are nice but not fully representative of this case. The general broadcasting rules also explain why there is no shape mismatch in:
arr[rows[:, np.newaxis], cols]
# rows[:, np.newaxis].shape == (3,1)
# cols.shape == (2,)
You can use:
arr[rows[:,None], cols[None]]
Output:
array([[1, 0],
       [0, 0],
       [1, 0]])
It looks like it is much quicker than indexing for large arrays.
arr[np.ix_([0,2,4],[1,2])]
array([[1, 0],
       [0, 0],
       [1, 0]])
document: https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ix_.html
This function takes N 1-D sequences and returns N outputs with N dimensions each, such that the shape is 1 in all but one dimension and the dimension with the non-unit shape value cycles through all N dimensions.
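A small sketch of what np.ix_ actually returns for the index lists used above (the shapes are the point):
import numpy as np
rows, cols = np.ix_([0, 2, 4], [1, 2])
print(rows.shape, cols.shape)  # (3, 1) (1, 2): an "open mesh" that broadcasts to (3, 2)
print(rows)  # column of row indices
print(cols)  # row of column indices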

Python Numpy Transpose Matrix [duplicate]

I use Python and NumPy and have some problems with "transpose":
import numpy as np
a = np.array([5,4])
print(a)
print(a.T)
Invoking a.T is not transposing the array. If a is for example [[],[]] then it transposes correctly, but I need the transpose of [...,...,...].
It's working exactly as it's supposed to. The transpose of a 1D array is still a 1D array! (If you're used to matlab, it fundamentally doesn't have a concept of a 1D array. Matlab's "1D" arrays are 2D.)
If you want to turn your 1D vector into a 2D array and then transpose it, just slice it with np.newaxis (or None, they're the same, newaxis is just more readable).
import numpy as np
a = np.array([5,4])[np.newaxis]
print(a)
print(a.T)
Generally speaking though, you don't ever need to worry about this. Adding the extra dimension is usually not what you want, if you're just doing it out of habit. Numpy will automatically broadcast a 1D array when doing various calculations. There's usually no need to distinguish between a row vector and a column vector (neither of which are vectors. They're both 2D!) when you just want a vector.
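For instance, a plain 1-D array already cooperates with 2-D arrays through broadcasting and matrix-vector products (a minimal sketch with illustrative values):
import numpy as np
M = np.arange(6).reshape(2, 3)
v = np.array([10, 20, 30])
print(M + v)   # v broadcasts across each row of M, no reshaping needed
print(M @ v)   # matrix-vector product with a plain 1-D array, result shape (2,)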
Use two bracket pairs instead of one. This creates a 2D array, which can be transposed, unlike the 1D array you create if you use one bracket pair.
import numpy as np
a = np.array([[5, 4]])
a.T
More thorough example:
>>> a = [3,6,9]
>>> b = np.array(a)
>>> b.T
array([3, 6, 9])  # Here it didn't transpose because 'a' is 1 dimensional
>>> b = np.array([a])
>>> b.T
array([[3],
       [6],
       [9]])  # Here it did transpose because 'a' is 2 dimensional
Use numpy's shape attribute to see what is going on here:
>>> b = np.array([10,20,30])
>>> b.shape
(3,)
>>> b = np.array([[10,20,30]])
>>> b.shape
(1, 3)
For 1D arrays:
a = np.array([1, 2, 3, 4])
a = a.reshape((-1, 1))  # <--- THIS IS IT
print(a)
[[1]
 [2]
 [3]
 [4]]
Once you understand that -1 here means "as many rows as needed", I find this to be the most readable way of "transposing" an array. If your array is of higher dimensionality simply use a.T.
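A quick illustration of how the -1 placeholder is inferred (a small sketch):
import numpy as np
a = np.arange(6)
print(a.reshape(-1, 1).shape)  # (6, 1): -1 lets numpy infer the number of rows
print(a.reshape(2, -1).shape)  # (2, 3): here -1 infers the number of columns instead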
You can convert an existing vector into a matrix by wrapping it in an extra set of square brackets...
from numpy import *
v=array([5,4]) ## create a numpy vector
array([v]).T ## transpose a vector into a matrix
numpy also has a matrix class (see array vs. matrix)...
matrix(v).T ## transpose a vector into a matrix
numpy 1D array --> column/row matrix:
>>> a=np.array([1,2,4])
>>> a[:, None]  # col
array([[1],
       [2],
       [4]])
>>> a[None, :] # row, or faster `a[None]`
array([[1, 2, 4]])
And as @joe-kington said, you can replace None with np.newaxis for readability.
To 'transpose' a 1d array to a 2d column, you can use numpy.vstack:
>>> numpy.vstack(numpy.array([1,2,3]))
array([[1],
       [2],
       [3]])
It also works for vanilla lists:
>>> numpy.vstack([1,2,3])
array([[1],
       [2],
       [3]])
Instead, use arr[:, None] to create a column vector.
You can only transpose a 2D array. You can use numpy.matrix to create a 2D array. This is three years late, but I am just adding to the possible set of solutions:
import numpy as np
m = np.matrix([2, 3])
m.T
Basically what the transpose function does is to swap the shape and strides of the array:
>>> a = np.ones((1,2,3))
>>> a.shape
(1, 2, 3)
>>> a.T.shape
(3, 2, 1)
>>> a.strides
(48, 24, 8)
>>> a.T.strides
(8, 24, 48)
In the case of a 1D numpy array (rank-1 array) the shape and strides are 1-element tuples and cannot be swapped, and the transpose of such a 1D array returns it unchanged. Instead, you can transpose a "row-vector" (numpy array of shape (1, n)) into a "column-vector" (numpy array of shape (n, 1)). To achieve this you have to first convert your 1D numpy array into a row-vector and then swap the shape and strides (transpose it). Below is a function that does it:
import numpy as np
from numpy.lib.stride_tricks import as_strided

def transpose(a):
    # view the input with its shape and strides reversed
    a = np.atleast_2d(a)
    return as_strided(a, shape=a.shape[::-1], strides=a.strides[::-1])
Example:
>>> a = np.arange(3)
>>> a
array([0, 1, 2])
>>> transpose(a)
array([[0],
       [1],
       [2]])
>>> a = np.arange(1, 7).reshape(2,3)
>>> a
array([[1, 2, 3],
       [4, 5, 6]])
>>> transpose(a)
array([[1, 4],
       [2, 5],
       [3, 6]])
Of course you don't have to do it this way, since you have a 1D array and can directly reshape it into an (n, 1) array with a.reshape((-1, 1)) or a[:, None]. I just wanted to demonstrate how transposing an array works.
Another solution... :-)
>>> import numpy as np
>>> a = [1, 2, 4]
>>> a
[1, 2, 4]
>>> b = np.array([a]).T
>>> b
array([[1],
       [2],
       [4]])
The name of the function in numpy is column_stack.
>>> a = np.array([5, 4])
>>> np.column_stack(a)
array([[5, 4]])
I am just consolidating the posts above; hopefully it will save others some time.
The array below has shape (2,); it's a 1-D array:
b_new = np.array([2j, 3j])
There are two ways to transpose a 1-D array:
Slice it with np.newaxis or None:
print(b_new[np.newaxis].T.shape)
print(b_new[None].T.shape)
Another way of writing the above, without the .T operation:
print(b_new[:, np.newaxis].shape)
print(b_new[:, None].shape)
Wrapping in [ ] or using np.matrix also adds a new dimension:
print(np.array([b_new]).T.shape)
print(np.matrix(b_new).T.shape)
There is a method not described in the answers but described in the documentation for the numpy.ndarray.transpose method:
For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2D column vector, an additional dimension must be added. np.atleast_2d(a).T achieves this, as does a[:, np.newaxis].
One can do:
import numpy as np
a = np.array([5,4])
print(a)
print(np.atleast_2d(a).T)
Which (imo) is nicer than using newaxis.
As some of the comments above mentioned, the transpose of a 1D array is still a 1D array, so one way to "transpose" a 1D array is to reshape it to 2D first, like so:
np.transpose(a.reshape(len(a), 1))
To transpose a 1-D array (flat array) as you have in your example, you can use the np.expand_dims() function:
>>> a = np.expand_dims(np.array([5, 4]), axis=1)
>>> a
array([[5],
       [4]])
np.expand_dims() will add a dimension to the chosen axis. In this case, we use axis=1, which adds a column dimension, effectively transposing your original flat array.
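A small sketch showing how the axis argument picks where the new dimension goes:
import numpy as np
a = np.array([5, 4])
print(np.expand_dims(a, axis=0).shape)  # (1, 2): a row
print(np.expand_dims(a, axis=1).shape)  # (2, 1): a column, as in the example above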

understand numpy.dot with N-d array and 1-d array [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
To those who voted to close because it's unclear what I'm asking, here are the questions in my post:
Can anyone tell me what's the result of y?
Is there anything called sum product in Mathematics?
Is x subject to broadcasting?
Why is y a column/row vector?
What if x=np.array([[7],[2],[3]])?
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
Can anyone tell me what's the result of y?
I deliberately pixelated the screenshot so that you can pretend you are taking a test and cannot run Python to get the result.
https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot says
If a is an N-D array and b is a 1-D array, it is a sum product over
the last axis of a and b.
Is there anything called sum product in Mathematics?
Is x subject to broadcasting?
Why is y a column/row vector?
What if x=np.array([[7],[2],[3]])?
np.dot is essentially matrix multiplication when the dimensions match (i.e. w is 3x3 and x, viewed as 1x3, means the matrix product WX cannot be formed but XW is okay). In the first case:
>>> w=np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> x=np.array([7,2,3])
>>> w.shape
(3, 3)
>>> x.shape  # 1d vector
(3,)
So in this case it returns the inner product of each row of W with X:
>>> [np.dot(ww,x) for ww in w]
[20, 56, 92]
>>> np.dot(w,x)
array([20, 56, 92])  # both give the same result
change the order
>>> y = np.dot(x,w) # matrix mult as usual
>>> y
array([36, 48, 60])
In the second case:
>>> x=np.array([[7],[2],[3]])
>>> x.shape
(3, 1)
>>> y = np.dot(w,x) # matrix mult
>>> y
array([[20],
       [56],
       [92]])
However, this time the dimensions do not match for either matrix multiplication (3x1 times 3x3) or the row-wise inner product, so it raises an error:
>>> y = np.dot(x,w)
Traceback (most recent call last):
File "<ipython-input-110-dcddcf3bedd8>", line 1, in <module>
y = np.dot(x,w)
ValueError: shapes (3,1) and (3,3) not aligned: 1 (dim 1) != 3 (dim 0)
I don't think your question is unclear, but rather overly pedantic.
For example, why are you puzzled by sum product in this nD by 1d case, when the docs use inner product for the 1d by 1d case, and matrix product in the 2d by 2d case? Give yourself some freedom to read it as sum of the products, as done in the inner product.
To make your example clearer, make w rectangular, to better distinguish row actions from column ones:
In [168]: w=np.array([[1,2,3],[4,5,6]])
...: x=np.array([7,2,3])
...:
...:
In [169]: w.shape
Out[169]: (2, 3)
In [170]: x.shape
Out[170]: (3,)
The dot and its equivalent einstein notation:
In [171]: np.dot(w,x)
Out[171]: array([20, 56])
In [172]: np.einsum('ij,j->i',w,x)
Out[172]: array([20, 56])
The sum of the products is being done on the repeated j dimension, without summation on i.
We can do the same thing with broadcasted elementwise multiplication:
In [173]: (w*x[None,:]).sum(axis=1)
Out[173]: array([20, 56])
While this equivalent operation does use broadcasting, it's better not to think of dot in those terms.
matmul gives another description of the same action, adding a dimension to x to form a 2d by 2d matrix product, followed by a squeeze to remove the extra dimension. I don't think dot does that under the covers, but the result is the same.
This may also be called matrix vector multiplication, provided you don't insist on calling the 1d x a row vector or column vector.
Now for a 2d x, with shape (3,1):
In [175]: x2 = x[:,None]
In [176]: x2
Out[176]:
array([[7],
       [2],
       [3]])
In [177]: x2.shape
Out[177]: (3, 1)
In [178]: np.dot(w,x2)
Out[178]:
array([[20],
       [56]])
In [179]: np.einsum('ij,jk->ik',w,x2)
Out[179]:
array([[20],
       [56]])
The sum is over j, the last axis of w, and 2nd to the last of x. To do the same with elementwise we have to use broadcasting to generate a 3d outer product, and then do the sum to reduce the dimension back to 2.
In [180]: (w[:,:,None]*x2[None,:,:]).sum(axis=1)
Out[180]:
array([[20],
       [56]])
In this example a (2,3) dot (3,1) => (2,1). That's perfectly normal matrix product behavior. In the first, (2,3) dot (3,) => (2,). To me this is a logical generalization. (3,) dot (3,) => scalar (as opposed to a ()-shaped array) is a bit more of a special case.
I suspect the first case is mainly a problem for people who see a (3,) shape and think (1,3), a row-vector. (2,3) dot (1,3) doesn't work, because of the mismatch between the 3 and the 1.
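To make those shape pairings concrete, a small sketch:
import numpy as np
w = np.arange(6).reshape(2, 3)
print(np.dot(w, np.ones(3)).shape)       # (2,3) dot (3,)  -> (2,)
print(np.dot(w, np.ones((3, 1))).shape)  # (2,3) dot (3,1) -> (2,1)
print(np.dot(np.ones(3), np.ones(3)))    # (3,) dot (3,)   -> scalar 3.0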

Multiply an array (k,) or (k, n) by 1D array (k,)

I sometimes work with 1D arrays:
A = np.array([1, 2, 3, 4])
or 2D arrays (mono or stereo signals read with scipy.io.wavfile):
A = np.array([[1, 2], [3, 4], [5, 6], [7,8]])
Without having to distinguish these 2 cases with an if A.ndim == 2:..., is there a simple one-line solution to multiply this array A by an 1D array B = np.linspace(0., 1., 4)?
If A is 1D, then it's just A * B, and if A is 2D, what I mean is multiply each line of A by each element of B.
Note: this question arises naturally when working with both mono and stereo sounds read with scipy.io.wavfile.
Approach #1
We can use einsum to cover generic ndarrays -
np.einsum('i...,i->i...',A,B)
The trick here is the ellipsis, which broadcasts over the trailing dimensions after the first axis (whatever they are in A) and keeps them in the output, while the first axes of the two inputs stay aligned, which is the intended multiplication. With A as a 1D array there's no broadcasting, and this essentially reduces to np.einsum('i,i->i',A,B) under the hood.
Schematically put:
A : i x ....
B : i
out : i x ....
Hence, covers for A with any number of dimensions.
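A quick check of the recipe with the values from the question (a minimal sketch; A2 stands in for the stereo case):
import numpy as np
B = np.linspace(0., 1., 4)
A1 = np.array([1, 2, 3, 4])                      # 1-D (mono) case
A2 = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])  # 2-D (stereo) case
print(np.einsum('i...,i->i...', A1, B))  # same as A1 * B
print(np.einsum('i...,i->i...', A2, B))  # each row of A2 scaled by the matching B element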
More info on the use of ellipsis from the docs:
To enable and control broadcasting, use an ellipsis. Default
NumPy-style broadcasting is done by adding an ellipsis to the left of
each term, like np.einsum('...ii->...i', a). To take the trace along
the first and last axes, you can do np.einsum('i...i', a), or to do a
matrix-matrix product with the left-most indices instead of rightmost,
you can do np.einsum('ij...,jk...->ik...', a, b).
Approach #2
Using the fact that we are trying to align the first axis of A with the only axis of 1D array B, we can simply transpose A, multiply with B and finally transpose back -
(A.T*B).T
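The same quick check works for the transpose trick, assuming the same A1, A2, and B as in the sketch above:
import numpy as np
B = np.linspace(0., 1., 4)
A1 = np.array([1, 2, 3, 4])
A2 = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
print((A1.T * B).T)  # 1-D: .T is a no-op, so this is just A1 * B
print((A2.T * B).T)  # 2-D: the length-4 axis moves last so B broadcasts, then moves back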

Differences between array class and matrix class in numpy for matrix operation

I was trying to do matrix dot product and transpose with Numpy, and I found array can do many things matrix can do, such as dot product, point wise product, and transpose.
When I have to create a matrix, I have to create an array first.
example:
import numpy as np
array = np.ones([3,1])
matrix = np.matrix(array)
Since I can do matrix transpose and dot product in array type, I don't have to convert array into matrix to do matrix operations.
For example, the following line is valid, where A is an ndarray:
dot_product = np.dot(A.T, A)
The same operation can be expressed with a matrix-class variable A:
dot_product = A.T * A
For ndarray, however, the operator * is exactly the point-wise product. This makes ndarray and matrix look almost indistinguishable while behaving differently, which causes confusion.
The confusion is a serious problem, as noted in PEP 465:
Writing code using numpy.matrix also works fine. But trouble begins as
soon as we try to integrate these two pieces of code together. Code
that expects an ndarray and gets a matrix, or vice-versa, may crash or
return incorrect results. Keeping track of which functions expect
which types as inputs, and return which types as outputs, and then
converting back and forth all the time, is incredibly cumbersome and
impossible to get right at any scale.
It would be very tempting to stick with ndarray, deprecate matrix, and in the future support ndarray with matrix-operation methods such as .inverse(), .hermitian(), outerproduct(), etc.
The major reason I still have to use the matrix class is that it handles a 1d array as a 2d array, so I can transpose it.
It is very inconvenient to transpose a 1d array, since a 1d array of size n has shape (n,) instead of (1, n). For example, if I have to compute the inner product of two arrays:
A = [[1,1,1],[2,2,2],[3,3,3]]
B = [[1,2,3],[1,2,3],[1,2,3]]
np.dot(A,B) works fine, but if
B = [1,1,1]
its transpose is still a row vector (a 1d array is unchanged by .T).
I have to handle this special case when the dimensions of the input variable are unknown.
I hope this helps some people with the same trouble, and I'd like to know if there is any better way to handle matrix operations like in Matlab, especially with 1d arrays. Thanks.
Your first example is a column vector:
In [258]: x = np.arange(3).reshape(3,1)
In [259]: x
Out[259]:
array([[0],
       [1],
       [2]])
In [260]: xm = np.matrix(x)
dot produces the inner product, and the dimensions operate as: (1,3),(3,1) => (1,1)
In [261]: np.dot(x.T, x)
Out[261]: array([[5]])
the matrix product does the same thing:
In [262]: xm.T * xm
Out[262]: matrix([[5]])
(The same thing with 1d arrays produces a scalar value, np.dot([0,1,2],[0,1,2]) # 5)
Element multiplication of the arrays produces the outer product (as do np.outer(x, x) and np.dot(x, x.T)):
In [263]: x.T * x
Out[263]:
array([[0, 0, 0],
       [0, 1, 2],
       [0, 2, 4]])
For ndarray, * IS element wise multiplication (the .* of MATLAB, but with broadcasting added). For element multiplication of matrix use np.multiply(xm,xm). (scipy sparse matrices have a multiply method, X.multiply(other))
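A tiny sketch contrasting the two meanings of * here (same x and xm as above):
import numpy as np
x = np.arange(3).reshape(3, 1)
xm = np.matrix(x)
print(x * x)                # ndarray *: elementwise, stays (3, 1)
print(np.multiply(xm, xm))  # elementwise product of two matrices, also (3, 1)
print(xm * xm.T)            # matrix *: matrix product, (3, 1) times (1, 3) -> (3, 3)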
You quote from the PEP that added the @ operator (matmul). This, as well as np.tensordot and np.einsum, can handle larger-dimensional arrays and other mixes of products. Those don't make sense with np.matrix since it is restricted to 2d.
With your 3x3 A and B
In [273]: np.dot(A,B)
Out[273]:
array([[ 3,  6,  9],
       [ 6, 12, 18],
       [ 9, 18, 27]])
In [274]: C=np.array([1,1,1])
In [281]: np.dot(A,np.array([1,1,1]))
Out[281]: array([3, 6, 9])
Effectively this sums each row. np.dot(A,np.array([1,1,1])[:,None]) does the same thing, but returns a (3,1) array.
np.matrix was created years ago to make numpy (actually one of its predecessors) feel more like MATLAB. A key feature is that it is restricted to 2d. That's what MATLAB was like back in the 1990s. np.matrix and MATLAB don't have 1d arrays; instead they have single column or single row matrices.
If the fact that ndarrays can be 1d (or even 0d) is a problem there are many ways of adding that 2nd dimension. I prefer the [None,:] kind of syntax, but reshape is also useful. ndmin=2, np.atleast_2d, np.expand_dims also work.
np.sum and other operations that reduce dimensions have a keepdims=True parameter to counter that. The new @ gives an operator syntax for matrix multiplication. As far as I know, the np.matrix class does not have any compiled code of its own.
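A quick sketch of the alternatives just mentioned for adding that 2nd dimension, plus keepdims:
import numpy as np
a = np.array([1, 2, 3])
print(a[None, :].shape)             # (1, 3)
print(a.reshape(1, -1).shape)       # (1, 3)
print(np.array(a, ndmin=2).shape)   # (1, 3)
print(np.atleast_2d(a).shape)       # (1, 3)
print(np.expand_dims(a, 0).shape)   # (1, 3)
print(a.sum(keepdims=True).shape)   # (1,): keepdims preserves the reduced axis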
============
The method that implements * for np.matrix uses np.dot:
def __mul__(self, other):
    if isinstance(other, (N.ndarray, list, tuple)):
        # This promotes 1-D vectors to row vectors
        return N.dot(self, asmatrix(other))
    if isscalar(other) or not hasattr(other, '__rmul__'):
        return N.dot(self, other)
    return NotImplemented
