Numpy matrix multiplication of 2d matrix to give 3d matrix - python

I have two numpy arrays, like
A = array([[0, 1],
           [2, 3],
           [4, 5]])
B = array([[ 6,  7],
           [ 8,  9],
           [10, 11]])
For each row of A and B, say Ra and Rb respectively, I want to calculate transpose(Ra)*Rb. So for the given values of A and B, I want the following answer:
array([[[ 0,  0],
        [ 6,  7]],

       [[16, 18],
        [24, 27]],

       [[40, 44],
        [50, 55]]])
I have written the following code to do so:
x = np.outer(np.transpose(A[0]), B[0])
for i in range(1, len(A)):
    x = np.append(x, np.outer(np.transpose(A[i]), B[i]), axis=0)
Is there any better way to do this task?

You can extend the dimensions of A and B with np.newaxis/None to bring in broadcasting for a vectorized solution, like so -
A[...,None]*B[:,None,:]
Explanation: np.outer(np.transpose(A[i]), B[i]) basically does element-wise multiplication between a columnar version of A[i] and B[i]. You are repeating this for all rows in A against the corresponding rows in B. Please note that np.transpose() doesn't have any effect here, as np.outer takes care of the intended element-wise multiplications on its own.
I would describe these steps in vectorized terms and implement it like so -
Extend the dimensions of A and B to form 3D shapes, keeping axis=0 aligned as axis=0 in both extended versions. That leaves the last two axes to decide.
To bring in the element-wise multiplications, push axis=1 of the original 2D A to axis=1 of its 3D version, which creates a singleton dimension at axis=2 in the extended version of A.
This last singleton dimension of the 3D version of A has to align with the elements from axis=1 of the original 2D B for broadcasting to happen. So in the extended version of B, the elements from axis=1 of its 2D version are pushed to axis=2 of its 3D version, creating a singleton dimension at axis=1.
Finally, the extended versions are A[...,None] & B[:,None,:], and multiplying them gives us the desired output.
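As a quick check, a minimal sketch (using the arrays from the question) confirming that the broadcasted product matches the loop-based version:
import numpy as np

A = np.array([[0, 1], [2, 3], [4, 5]])
B = np.array([[6, 7], [8, 9], [10, 11]])

# Broadcasting: (3, 2, 1) * (3, 1, 2) -> (3, 2, 2)
out = A[..., None] * B[:, None, :]

# Loop-based reference: one outer product per row pair.
ref = np.array([np.outer(A[i], B[i]) for i in range(len(A))])
print(np.array_equal(out, ref))  # True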

Related

Is there any way to vectorize a rolling cross-correlation in python based on my example?

Let's suppose I have two arrays that represent pixels in pictures.
I want to build an array of tensordot products of pixels of a smaller picture with a bigger picture as it "scans" the latter. By "scanning" I mean iteration over rows and columns while creating overlays with the original picture.
For instance, a 2x2 picture can be overlaid on top of 3x3 in four different ways, so I want to produce a four-element array that contains tensordot products of matching pixels.
Tensordot is calculated by multiplying a[i,j] with b[i,j] element-wise and summing the terms.
Please examine this code:
import numpy as np

a = np.array([[0, 1, 2],
              [3, 4, 5],
              [6, 7, 8]])
b = np.array([[0, 1],
              [2, 3]])

shape_diff = (a.shape[0] - b.shape[0] + 1,
              a.shape[1] - b.shape[1] + 1)

def compute_pixel(x, y):
    sub_matrix = a[x : x + b.shape[0],
                   y : y + b.shape[1]]
    return np.tensordot(sub_matrix, b, axes=2)

def process():
    arr = np.zeros(shape_diff)
    for i in range(shape_diff[0]):
        for j in range(shape_diff[1]):
            arr[i, j] = compute_pixel(i, j)
    return arr

print(process())
Computing a single pixel is very easy: all I need is the starting location coordinates within a. From there I match the size of b and take the tensordot product.
However, because I need to do this all over again for each x and y location as I iterate over rows and columns, I've had to use a loop, which is of course suboptimal.
In the next piece of code I have tried to utilize a handy feature of tensordot, which also accepts tensors as arguments. In other words, I can feed an array of arrays for different combinations of a while keeping b the same.
However, to create an array of said combinations, I couldn't think of anything better than using another loop, which sounds rather silly in this case.
def try_vector():
    tensor = np.zeros(shape_diff + b.shape)
    for i in range(shape_diff[0]):
        for j in range(shape_diff[1]):
            tensor[i, j] = a[i : i + b.shape[0],
                             j : j + b.shape[1]]
    return np.tensordot(tensor, b, axes=2)

print(try_vector())
Note: tensor size is the sum of two tuples, which in this case gives (2, 2, 2, 2)
Yet even if I produced such an array, it would be prohibitively large to be of any practical use: doing this for a 1000x1000 picture could consume all the available memory.
So, is there any other way to avoid loops in this problem?
In [111]: process()
Out[111]:
array([[19., 25.],
       [37., 43.]])
tensordot with axes=2 is the same as an element-wise multiply followed by a sum:
In [116]: np.tensordot(a[0:2,0:2],b, axes=2)
Out[116]: array(19)
In [126]: (a[0:2,0:2]*b).sum()
Out[126]: 19
A lower-memory way of generating your tensor is:
In [121]: np.lib.stride_tricks.sliding_window_view(a, (2, 2))
Out[121]:
array([[[[0, 1],
         [3, 4]],

        [[1, 2],
         [4, 5]]],


       [[[3, 4],
         [6, 7]],

        [[4, 5],
         [7, 8]]]])
We can do a broadcasted multiply, and sum on the last 2 axes:
In [129]: (Out[121]*b).sum((2,3))
Out[129]:
array([[19, 25],
       [37, 43]])
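Putting it together, here is a self-contained sketch of the vectorized approach (sliding_window_view requires NumPy >= 1.20, and the windows are views into a, so building them costs almost no extra memory):
import numpy as np

a = np.arange(9).reshape(3, 3)
b = np.array([[0, 1],
              [2, 3]])

# Each 2x2 window is a view into `a`; nothing is copied here.
windows = np.lib.stride_tricks.sliding_window_view(a, b.shape)

# Broadcasted multiply against b, then sum over the two window axes.
result = (windows * b).sum(axis=(2, 3))
print(result)
# [[19 25]
#  [37 43]]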

Transform 3D array into a 2D matrix with NumPy [duplicate]

What are the advantages and disadvantages of each?
From what I've seen, either one can work as a replacement for the other if need be, so should I bother using both or should I stick to just one of them?
Will the style of the program influence my choice? I am doing some machine learning using numpy, so there are indeed lots of matrices, but also lots of vectors (arrays).
Numpy matrices are strictly 2-dimensional, while numpy arrays (ndarrays) are
N-dimensional. Matrix objects are a subclass of ndarray, so they inherit all
the attributes and methods of ndarrays.
The main advantage of numpy matrices is that they provide a convenient notation
for matrix multiplication: if a and b are matrices, then a*b is their matrix
product.
import numpy as np
a = np.mat('4 3; 2 1')
b = np.mat('1 2; 3 4')
print(a)
# [[4 3]
#  [2 1]]
print(b)
# [[1 2]
#  [3 4]]
print(a*b)
# [[13 20]
#  [ 5  8]]
On the other hand, as of Python 3.5, NumPy supports infix matrix multiplication using the @ operator, so you can achieve the same convenience of matrix multiplication with ndarrays in Python >= 3.5.
import numpy as np
a = np.array([[4, 3], [2, 1]])
b = np.array([[1, 2], [3, 4]])
print(a@b)
# [[13 20]
#  [ 5  8]]
Both matrix objects and ndarrays have .T to return the transpose, but matrix
objects also have .H for the conjugate transpose, and .I for the inverse.
In contrast, numpy arrays consistently abide by the rule that operations are
applied element-wise (except for the new @ operator). Thus, if a and b are numpy arrays, then a*b is the array
formed by multiplying the components element-wise:
c = np.array([[4, 3], [2, 1]])
d = np.array([[1, 2], [3, 4]])
print(c*d)
# [[4 6]
#  [6 4]]
To obtain the result of matrix multiplication, you use np.dot (or @ in Python >= 3.5, as shown above):
print(np.dot(c,d))
# [[13 20]
#  [ 5  8]]
The ** operator also behaves differently:
print(a**2)
# [[22 15]
#  [10  7]]
print(c**2)
# [[16 9]
#  [ 4 1]]
Since a is a matrix, a**2 returns the matrix product a*a.
Since c is an ndarray, c**2 returns an ndarray with each component squared
element-wise.
There are other technical differences between matrix objects and ndarrays
(having to do with np.ravel, item selection and sequence behavior).
The main advantage of numpy arrays is that they are more general than
2-dimensional matrices. What happens when you want a 3-dimensional array? Then
you have to use an ndarray, not a matrix object. Thus, learning to use matrix
objects is more work -- you have to learn matrix object operations, and
ndarray operations.
Writing a program that mixes both matrices and arrays makes your life difficult
because you have to keep track of what type of object your variables are, lest
multiplication return something you don't expect.
In contrast, if you stick solely with ndarrays, then you can do everything
matrix objects can do, and more, except with slightly different
functions/notation.
If you are willing to give up the visual appeal of NumPy matrix product
notation (which can be achieved almost as elegantly with ndarrays in Python >= 3.5), then I think NumPy arrays are definitely the way to go.
PS. Of course, you really don't have to choose one at the expense of the other,
since np.asmatrix and np.asarray allow you to convert one to the other (as
long as the array is 2-dimensional).
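For example, a minimal round trip between the two types:
import numpy as np

a = np.array([[1, 2], [3, 4]])
m = np.asmatrix(a)   # ndarray -> matrix (shares the same data)
b = np.asarray(m)    # matrix -> plain ndarray
print(type(m), type(b))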
There is a synopsis of the differences between NumPy arrays and NumPy matrices here.
Scipy.org recommends that you use arrays:
'array' or 'matrix'? Which should I use? - Short answer
Use arrays.
They support the multidimensional array algebra that is supported in MATLAB.
They are the standard vector/matrix/tensor type of NumPy. Many NumPy functions return arrays, not matrices.
There is a clear distinction between element-wise operations and linear algebra operations.
You can have standard vectors or row/column vectors if you like.
Until Python 3.5 the only disadvantage of using the array type was that you had to use dot instead of * to multiply (reduce) two tensors (scalar product, matrix vector multiplication etc.). Since Python 3.5 you can use the matrix multiplication @ operator.
Given the above, we intend to deprecate matrix eventually.
Just to add one case to unutbu's list.
For me, one of the biggest practical differences of numpy ndarrays compared to numpy matrices or matrix languages like MATLAB is that the dimension is not preserved in reduce operations. Matrices are always 2d, while the mean of an array, for example, has one dimension less.
For example, demeaning the rows of a matrix or array:
with matrix
>>> m = np.mat([[1, 2], [2, 3]])
>>> m
matrix([[1, 2],
        [2, 3]])
>>> mm = m.mean(1)
>>> mm
matrix([[ 1.5],
        [ 2.5]])
>>> mm.shape
(2, 1)
>>> m - mm
matrix([[-0.5,  0.5],
        [-0.5,  0.5]])
with array
>>> a = np.array([[1, 2], [2, 3]])
>>> a
array([[1, 2],
       [2, 3]])
>>> am = a.mean(1)
>>> am.shape
(2,)
>>> am
array([ 1.5,  2.5])
>>> a - am                 # wrong
array([[-0.5, -0.5],
       [ 0.5,  0.5]])
>>> a - am[:, np.newaxis]  # right
array([[-0.5,  0.5],
       [-0.5,  0.5]])
I also think that mixing arrays and matrices gives rise to many "happy" debugging hours.
However, scipy.sparse matrices are always matrices in terms of operators like multiplication.
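A small sketch of that behavior (note this applies to the spmatrix classes such as csr_matrix; the newer sparse array classes use * element-wise):
import numpy as np
from scipy import sparse

s = sparse.csr_matrix(np.array([[1, 2],
                                [3, 4]]))

# For sparse matrices, * is matrix multiplication, not element-wise.
print((s * s).toarray())
# [[ 7 10]
#  [15 22]]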
As per the official documentation, it is no longer advisable to use the matrix class, since it will be removed in the future.
https://numpy.org/doc/stable/reference/generated/numpy.matrix.html
As other answers already state, you can achieve all the operations with NumPy arrays.
As others have mentioned, perhaps the main advantage of matrix was that it provided a convenient notation for matrix multiplication.
However, in Python 3.5 there is finally a dedicated infix operator for matrix multiplication: @.
With recent NumPy versions, it can be used with ndarrays:
A = numpy.ones((1, 3))
B = numpy.ones((3, 3))
A @ B
So nowadays, even more so, when in doubt you should stick to ndarray.
Matrix Operations with Numpy Arrays:
I would like to keep updating this answer about matrix operations with numpy arrays, in case some users are looking for information about matrices and numpy.
As the accepted answer and the numpy-ref.pdf say:
class numpy.matrix will be removed in the future.
So matrix algebra operations now have to be done with NumPy arrays.
a = np.array([[1, 3], [-2, 4]])
b = np.array([[3, -2], [5, 6]])
Matrix multiplication (infix matrix multiplication):
a@b
array([[18, 16],
       [14, 28]])
Transpose:
ab = a@b
ab.T
array([[18, 14],
       [16, 28]])
Inverse of a matrix:
np.linalg.inv(ab)
array([[ 0.1       , -0.05714286],
       [-0.05      ,  0.06428571]])
ab_i = np.linalg.inv(ab)
ab@ab_i  # proof of inverse
array([[1., 0.],
       [0., 1.]])  # identity matrix
Determinant of a matrix:
np.linalg.det(ab)
279.9999999999999
Solving a linear system:
x + y = 3
x + 2y = -8
b = np.array([3, -8])
a = np.array([[1, 1], [1, 2]])
x = np.linalg.solve(a, b)
x
array([ 14., -11.])
# Solution: x=14, y=-11
Eigenvalues and eigenvectors:
a = np.array([[10, -18], [6, -11]])
np.linalg.eig(a)
(array([ 1., -2.]), array([[0.89442719, 0.83205029],
        [0.4472136 , 0.5547002 ]]))
An advantage of matrices is easier instantiation through text rather than nested square brackets.
With matrices you can do
np.matrix("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1")
and get the desired output directly:
matrix([[1.+0.j, 1.+1.j, 0.+0.j],
        [0.+0.j, 0.+1.j, 0.+0.j],
        [0.+0.j, 0.+0.j, 1.+0.j]])
If you use arrays, this does not work:
np.array("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1")
output:
array('1, 1+1j, 0; 0, 1j, 0; 0, 0, 1', dtype='<U29')
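If you want the string syntax but an ndarray result, one workaround (a sketch; it still goes through the soon-to-be-removed matrix class under the hood) is to parse with np.matrix and convert immediately:
import numpy as np

# Parse the MATLAB-style string, then drop down to a plain ndarray.
a = np.asarray(np.matrix("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1"))
print(a.dtype)  # complex128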

Given a MxN grid in the x,y plane, compute f(x,y) and store it into matrix (python)

I'm looking for something similar to ARRAYFUN in MATLAB, but for Python. What I need to do is to compute a matrix whose components are exp(j*dot([kx,ky], [x,y])), where [kx,ky] is a fixed known vector, and [x,y] is an element from a meshgrid.
What I was trying to do is to define
RX, RY = np.meshgrid(np.arange(N), np.arange(M))
R = np.dstack((RX,RY))
and then iterate over the R indices, filling a matrix with the same shape as R, in which each component would be exp(j*dot([kx,ky], [x,y])), with [x,y] taken from R. This looks neither efficient nor elegant.
Thanks for your help.
You could do what we used to do in MATLAB before they added ARRAYFUN - change the calculation so it works with arrays. That could be tricky in the days when everything in MATLAB was 2d; allowing more dimensions made it easier. numpy allows more than 2 dimensions.
Anyway, here's a quick attempt:
In [497]: rx, ry = np.meshgrid(np.arange(3), np.arange(4))
In [498]: R = np.dstack((rx, ry))
In [499]: R.shape
Out[499]: (4, 3, 2)
In [500]: kx, ky = 1, 2
In [501]: np.einsum('i,jki->jk', [kx, ky], R)
Out[501]:
array([[0, 1, 2],
       [2, 3, 4],
       [4, 5, 6],
       [6, 7, 8]])
There are other options - dot, matmul and tensordot - but einsum is the one I like to use; I've worked with it enough to quickly set up a multidimensional dot product.
Now just apply the 1j and exp to each element:
In [502]: np.exp(np.einsum('i,jki->jk', [kx, ky], R) * 1j)
Out[502]:
array([[ 1.00000000+0.j        ,  0.54030231+0.84147098j,
        -0.41614684+0.90929743j],
       [-0.41614684+0.90929743j, -0.98999250+0.14112001j,
        -0.65364362-0.7568025j ],
       [-0.65364362-0.7568025j ,  0.28366219-0.95892427j,
         0.96017029-0.2794155j ],
       [ 0.96017029-0.2794155j ,  0.75390225+0.6569866j ,
        -0.14550003+0.98935825j]])
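Since [kx, ky] is fixed, the same result can also be had with plain broadcasting, without building R at all (a minimal sketch):
import numpy as np

kx, ky = 1, 2
RX, RY = np.meshgrid(np.arange(3), np.arange(4))

# dot([kx, ky], [x, y]) is just kx*x + ky*y, computed over the whole grid.
result = np.exp(1j * (kx * RX + ky * RY))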

Need help converting Matlab's bsxfun to numpy

I'm trying to convert a piece of MATLAB code, and this is a line I'm struggling with:
f = 0
wlab = reshape(bsxfun(@times, cat(3, 1-f, f/2, f/2), lab), [], 3)
I've come up with
wlab = lab*(np.concatenate((3,1-f,f/2,f/2)))
How do I reshape it now?
Not going to do it for your code, but more as a general knowledge:
bsxfun is a function that fills a gap in MATLAB that Python doesn't need filled: broadcasting.
Broadcasting means that when an array being multiplied/added/etc. is not the same size as the other operand, it is virtually repeated along the mismatched dimensions so the shapes line up.
So in Python, if you have a 3D array A and you want to multiply every 2D slice of it with a 2D array B, you don't need anything else: NumPy will broadcast B for you, repeating the matrix again and again, and A*B will suffice. In MATLAB, however, that raises a matrix-dimension-mismatch error. To overcome it you'd use bsxfun as bsxfun(@times, A, B), which broadcasts (repeats) B over the 3rd dimension of A.
This means that converting bsxfun to python generally requires nothing.
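For instance, a minimal sketch of that 3D-times-2D case:
import numpy as np

A = np.arange(24).reshape(4, 2, 3)  # a stack of four 2x3 slices
B = np.array([[1, 0, 2],
              [0, 1, 0]])           # a single 2x3 matrix

# B is broadcast over A's leading (stack) axis: each 2x3 slice of A
# is multiplied element-wise by B, no bsxfun needed.
C = A * B
print(C.shape)  # (4, 2, 3)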
MATLAB
reshape(x,[],3)
is the equivalent of numpy
np.reshape(x,(-1,3))
the [] and -1 are placeholders for 'fill in the correct shape here'.
===============
I just tried the MATLAB expression in Octave - it's on a different machine, so I'll just summarize the action.
For lab=1:6 (6 elements) the bsxfun produces a (1,6,3) matrix; the reshape turns it into (6,3), i.e. just removes the first dimension. The cat produces a (1,1,3) matrix.
np.reshape(np.array([1-f,f/2,f/2])[None,None,:]*lab[None,:,None],(-1,3))
For lab with shape (n,m), the bsxfun produces a (n,m,3) matrix; the reshape would make it (n*m,3)
So for a 2d lab, the numpy needs to be
np.array([1-f,f/2,f/2])[None,None,:]*lab[:,:,None]
(In MATLAB the lab will always be 2d (or larger), so this 2nd case is closer to its action even if n is 1.)
=======================
np.array([1-f,f/2,f/2])*lab[...,None]
would handle any shaped lab
If I make the Octave lab (4,2,3), the bsxfun result is also (4,2,3).
The matching numpy expression would be
In [94]: (np.array([1-f,f/2,f/2])*lab).shape
Out[94]: (4, 2, 3)
numpy adds dimensions to the start of the (3,) array to match the dimensions of lab, effectively
(np.array([1-f,f/2,f/2])[None,None,:]*lab) # for 3d lab
If f=0, then the array is [1,0,0], so this has the effect of zeroing values on the last dimension of lab. In effect, changing the 'color'.
It is equivalent to
import numpy as np
wlab = np.kron([1-f,f/2,f/2],lab.reshape(-1,1))
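A quick sanity check (a sketch with hypothetical values for f and lab) that the kron form matches the broadcasting form for a 2d lab:
import numpy as np

f = 0.2
lab = np.arange(6.0).reshape(2, 3)  # hypothetical example data

w = np.array([1 - f, f / 2, f / 2])
v1 = (w * lab[..., None]).reshape(-1, 3)  # broadcasting, then flatten
v2 = np.kron(w, lab.reshape(-1, 1))       # kron equivalent
print(np.allclose(v1, v2))                # True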
In Python, if you use numpy you do not need to do any broadcasting, as this is done automatically for you.
For instance, looking at the following code should make it clearer:
>>> import numpy as np
>>> a = np.array([[1, 2, 3], [3, 4, 5], [6, 7, 8], [9, 10, 100]])
>>> b = np.array([1, 2, 3])
>>> a
array([[  1,   2,   3],
       [  3,   4,   5],
       [  6,   7,   8],
       [  9,  10, 100]])
>>> b
array([1, 2, 3])
>>> a - b
array([[ 0,  0,  0],
       [ 2,  2,  2],
       [ 5,  5,  5],
       [ 8,  8, 97]])

How to choose axis value in numpy array

I am a new user to numpy and I was using numpy delete, where the documentation mentions that to delete a horizontal row we should use axis=0, but other numpy documentation (the glossary) says the horizontal axis is 1. It would be great if someone could let me know what is wrong in my understanding.
An array is a systematic way of structuring numbers in grids of any dimensionality. The grid directions have labels, and these labels come from a convention of how new dimensions are added to a grid.
Here's the convention:
The simplest such grid is a 0-dimensional (0D) array, which has no axes and can only hold a scalar. This is a 0D array:
42
If we start putting scalars into a list we get a 1D array. This new grid only has one axis, and if we want to label that axis with a number, we better start with something simple - like axis=0! A 1D array could be:
# ----0--->
[42, π, √2]
Now we want to create an array of 1D arrays, which will give us a 2D array. In NumPy the new dimension is added at the front: the new vertical axis (running down the stack of rows) becomes axis=0, and the old horizontal axis is renumbered to axis=1. Here's what it could look like:
# ----1---->
[[42, π, √2],  # |
 [1,  2,  3],  # 0
 [10, 20, 30]] # V
The true beauty is that this generalizes to infinity. If we need a box of numbers, we create a 3D array by stacking 2D arrays; the new depth axis of the box becomes axis=0, and the row and column axes shift to axis=1 and axis=2. If we wanted a 4D array, we would just make a list of boxes (3D arrays) and index every box along a new axis=0 again. This can go on forever: each new dimension is prepended, so the last axis is always the one that runs within a single row.
In NumPy:
Any function/method that takes an axis argument uses this convention. For a 2D array this means that doing something like np.delete(X, [1, 2, 3], axis=0) indexes along the 0th (vertical) axis, returning X without rows 1, 2 and 3. The same logic applies for getting values from an array.
X[rows_along_0th_axis, columns_along_1st_axis, ..., vectors_along_nth_axis]
Taking from the links that you provided, here are the excerpts from numpy delete and the glossary that probably caused the confusion, followed by a clarification.
Excerpt
>>> arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
>>> arr
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12]])
>>> np.delete(arr, 1, 0)
array([[ 1,  2,  3,  4],
       [ 9, 10, 11, 12]])
Excerpt
the first running vertically downwards across rows (axis 0), and the
second running horizontally across columns (axis 1)
I think the confusion derives from the words vertically and horizontally in the second excerpt.
What the second excerpt means is that by setting axis it is possible to decide over which dimension to move. For example, in a 2d matrix, axis=0 corresponds to iterating over the rows (thus moving vertically over the array), while axis=1 corresponds to iterating over the columns (thus moving horizontally over the array). It does not say that axis=1 corresponds to the horizontal axis, as the OP understood.
The delete function follows the above description: with np.delete(arr, 1, axis=0), the function iterates over the rows and deletes the row with index 1. If, instead, columns should be deleted, then use axis=1. For example, on the same array arr:
>>> np.delete(arr, [0, 1, 4], axis=1)
array([[ 3,  4],
       [ 7,  8],
       [11, 12]])
in which delete iterates over the columns and removes the columns with indices 0 and 1; nothing else is deleted, since a column with index 4 does not exist. (Note that recent NumPy versions raise an IndexError for such out-of-bounds indices instead of silently ignoring them.)
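To cement the idea, a small sketch showing how the axis argument also picks the direction of movement for reductions:
import numpy as np

arr = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [9, 10, 11, 12]])

# axis=0 moves vertically down the rows: one sum per column.
print(arr.sum(axis=0))  # [15 18 21 24]

# axis=1 moves horizontally across the columns: one sum per row.
print(arr.sum(axis=1))  # [10 26 42]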
