"Vectorized" Matrix-Vector multiplication in numpy - python

I have an $I$-indexed array $V = (V_i)_{i \in I}$ of (column) vectors $V_i$, which I want to multiply pointwise (along $i \in I$) by a matrix $M$. So I'm looking for a "vectorized" operation, wherein the individual operation is a multiplication of a matrix with a vector; that is
$W = (M V_i)_{i \in I}$
Is there a numpy way to do this?
numpy.dot unfortunately assumes that $V$ is a matrix, instead of an $I$-indexed family of vectors, which obviously fails.
So basically I want to "vectorize" the operation
W = [np.dot(M, V[i]) for i in range(N)]
Considering the 2D array V as a list (first index) of column vectors (second index).
If
shape(M) == (2, 2)
shape(V) == (N, 2)
Then
shape(W) == (N, 2)

EDIT:
Based on your iterative example, it seems this can be done with a dot product plus some transposes to match the shapes. This is the same as (M @ V.T).T, which is the transpose of M @ V.T.
# Step by step
((2,2) @ (5,2).T).T
-> ((2,2) @ (2,5)).T
-> (2,5).T
-> (5,2)
Code to prove this is as follows. Your iterative solution, stacked into a matrix W, is exactly equal to the proposed solution.
M = np.random.random((2,2))
V = np.random.random((5,2))
# YOUR ITERATIVE SOLUTION (STACKED AS MATRIX)
W = np.stack([np.dot(M, V[i]) for i in range(5)])
print(W)
#array([[0.71663319, 0.84053871],
#       [0.28626354, 0.36282745],
#       [0.26865497, 0.55552295],
#       [0.40165606, 0.10177711],
#       [0.33950909, 0.54215385]])
# PROPOSED DOT PRODUCT
solution = (M @ V.T).T  # <---------------
print(solution)
#array([[0.71663319, 0.84053871],
#       [0.28626354, 0.36282745],
#       [0.26865497, 0.55552295],
#       [0.40165606, 0.10177711],
#       [0.33950909, 0.54215385]])
np.allclose(W, solution) #compare the 2 matrices
True
IIUC, you are looking for a pointwise multiplication of a matrix M and a set of vectors V (with broadcasting).
The matrix here is (3,3), while V is an array holding 4 column vectors, each of which you want to independently multiply with the matrix while obeying broadcasting rules.
# Broadcasting Rules
M -> 3, 3
V -> 4, 1, 3 #V.T[:,None,:]
----------------
R -> 4, 3, 3
----------------
Code for this -
M = np.array([[1,1,1],
              [0,0,0],
              [1,1,1]])   # (3,3) matrix M

V = np.array([[1,2,3,4],
              [1,2,3,4],
              [1,2,3,4]]) # (3,4) array storing 4 column vectors
R = M * V.T[:,None,:] #<--------------
R
array([[[1, 1, 1],
        [0, 0, 0],
        [1, 1, 1]],

       [[2, 2, 2],
        [0, 0, 0],
        [2, 2, 2]],

       [[3, 3, 3],
        [0, 0, 0],
        [3, 3, 3]],

       [[4, 4, 4],
        [0, 0, 0],
        [4, 4, 4]]])
After this, if you need any aggregation, you can reduce R with the required operations (see the check below).
For example, multiplying matrix M with the column vector [1,1,1] gives R[0] -
array([[1, 1, 1],
       [0, 0, 0],
       [1, 1, 1]])
while multiplying matrix M with the column vector [4,4,4] gives R[3] -
array([[4, 4, 4],
       [0, 0, 0],
       [4, 4, 4]])
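For instance, if what you ultimately want is the matrix-vector products M @ v for each stored column vector v, summing R over its last axis recovers them. A quick check, reusing the M, V and R from above:
# each slice R[k] is M scaled columnwise by the k-th stored vector,
# so summing over the last axis gives M @ v_k for every k at once
W = R.sum(axis=2)          # shape (4, 3)
np.allclose(W, (M @ V).T)  # True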

With
shape(M) == (2, 2)
shape(V) == (N, 2)
and
W = [np.dot(M, V[i]) for i in range(N)]
V[i] is (2,), so np.dot(M, V[i]) is (2,2) with (2,) => (2,), with sum-of-products over the last dimension of M. np.array(W) then has shape (N,2).
For 2d A, B, np.dot(A,B) does sum-of-products with the last dimension of A and the 2nd-to-last of B. You want the last dim of M with the last of V.
One way is:
np.dot(M, V.T).T  # (2,2) with (2,N) => (2,N) => (N,2)
(M @ V.T).T       # with the matmul operator
Sometimes einsum makes the relation between axes clearer:
np.einsum('ij,nj->ni',M,V)
np.einsum('ij,jn->in',M,V.T).T # with j in last/2nd last positions
Or switching the order of V and M:
V @ M.T  # 'nj,ji->ni'
Or treating the N dimension as a batch, we could make V[:,:,None] (N,2,1). This could be thought of as N (2,1) "column vectors".
M @ V[:,:,None]  # (N,2,1)
np.einsum('ij,njk->nik', M, V[:,:,None]) # again j is in the last/2nd last slots
Numerically:
In [27]: M = np.array([[1,2],[3,4]]); V = np.array([[1,2],[2,3],[3,4]])
In [28]: [M@V[i] for i in range(3)]
Out[28]: [array([ 5, 11]), array([ 8, 18]), array([11, 25])]
In [30]: (M@V.T).T
Out[30]:
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])
In [31]: V@M.T
Out[31]:
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])
Or the batched:
In [32]: M@V[:,:,None]
Out[32]:
array([[[ 5],
        [11]],

       [[ 8],
        [18]],

       [[11],
        [25]]])
In [33]: np.squeeze(M@V[:,:,None])
Out[33]:
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])
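One caveat on that last form: np.squeeze with no arguments drops every size-1 axis, so if N happened to be 1 the batch axis would disappear too. Passing the axis explicitly is safer:
np.squeeze(M @ V[:,:,None], axis=-1)  # drop only the trailing size-1 axis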

Related

How to multiply each row of one matrix with every row of another matrix in Python?

The A and B matrices will be different when I run the program:
A = np.array([[1, 1, 1], [2, 2, 2]])
B = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
The output matrix (C) should have one row for each row of matrix B.
As the title says, I'm trying to multiply each row of one matrix (A) with every row of another matrix (B), and I would like to sum them.
For example,
Dimension of C = (3, 3)
C = [[A(0)*B(0) + A(1)*B(0)], [A(0)*B(1) + A(1)*B(1)], [A(0)*B(2) + A(1)*B(2)]]
I would like to know if there is a numpy function that does that.
Use numpy broadcasting:
C = (A * B[:, None]).sum(axis=1)
Output:
>>> C
array([[3, 3, 3],
       [6, 6, 6],
       [9, 9, 9]])
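Since each row of B is multiplied elementwise by every row of A and then summed over axis 1, the sum over A's rows can also be factored out first. A quick equivalence check, reusing the A, B and C from above:
# sum over A's rows first, then a single elementwise multiply
C2 = A.sum(axis=0) * B  # (3,) broadcast against (3, 3)
np.allclose(C, C2)      # True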

selective row sum matrix in numpy

Is there any efficient numpy way to do the following:
Assume I have some matrix M of size R x C. Now assume I have another matrix
E which is of shape R x a (where a is just some constant, a < C), which contains row indices of
M (and -1 for padding, i.e., every element of E is in {-1, 0, ..., R-1}). For example,
M = array([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]])
E = array([[ 0,  1],
           [ 2, -1],
           [-1,  0]])
Now, given those matrices, I want to generate a third matrix P, where the i-th row of P
contains the sum of the rows of M indexed by E[i,:]. In the example, P will be,
P[0,:] = M[0,:] + M[1,:]
P[1,:] = M[2,:]
P[2,:] = M[0,:]
Yes, doing it with a loop is pretty straightforward and easy; I was wondering if there is
any fancy numpy way to make it more efficient (assuming that I want to do it with large matrices,
e.g., 200 x 200).
Thanks!
One way would be to index into the original array and sum, and then subtract out the spurious contributions of the -1s (which index the last row of M) -
out = M[E].sum(1) - M[-1]*(E==-1).sum(1)[:,None]
Another way would be to pad M with a row of zeros at the end, so that the -1s index into those zeros and hence have no effect on the final sum after indexing -
M1 = np.vstack((M, np.zeros((1,M.shape[1]), dtype=M.dtype)))
out = M1[E].sum(1)
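To see why the padding trick works with the example M and E above: every -1 indexes the appended zero row of M1, so it contributes nothing to the sum.
M1[-1]        # array([0, 0, 0]), the padding row that all -1s select
M1[E].sum(1)
# array([[5, 7, 9],
#        [7, 8, 9],
#        [1, 2, 3]])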
If there is at most one -1 per row in E, we can optimize further -
out = M[E].sum(1)
m = (E==-1).any(1)
out[m] -= M[-1]
Another, based on tensor multiplication (the comparison must be against np.arange(M.shape[0]), the row count of M, so the row indices in E match; the -1 padding entries match nothing and drop out) -
np.einsum('ij,kli->kj', M, (E[...,None]==np.arange(M.shape[0])))
You could index M with E, and np.sum only where the actual indices in E are greater than or equal to 0. For that we have the where parameter:
np.sum(M[E], where=(E>=0)[...,None], axis=1)
array([[5, 7, 9],
       [7, 8, 9],
       [1, 2, 3]])
Where we have that:
M[E]
array([[[1, 2, 3],
        [4, 5, 6]],

       [[7, 8, 9],
        [7, 8, 9]],

       [[7, 8, 9],
        [1, 2, 3]]])
is summed along the rows, masked by:
(E>=0)[...,None]
array([[[ True],
        [ True]],

       [[ True],
        [False]],

       [[False],
        [ True]]])
Probably not the fastest, but maybe educational: the operation you are describing can be thought of as matrix multiplication with a certain adjacency matrix:
from scipy import sparse
# construct adjacency matrix
indices = E[E != -1]
indptr = np.concatenate([[0], np.count_nonzero(E != -1, axis=1).cumsum()])
data = np.ones_like(indices)  # one entry per stored index
aux = sparse.csr_matrix((data, indices, indptr))
# multiply
aux @ M
# array([[5, 7, 9],
#        [7, 8, 9],
#        [1, 2, 3]], dtype=int64)

Repeat specific row or column of Python numpy 2D array [duplicate]

I'd like to copy a numpy 2D array into a third dimension. For example, given the 2D numpy array:
import numpy as np
arr = np.array([[1, 2], [1, 2]])
# arr.shape = (2, 2)
convert it into a 3D matrix with N such copies in a new dimension. Acting on arr with N=3, the output should be:
new_arr = np.array([[[1, 2], [1, 2]],
                    [[1, 2], [1, 2]],
                    [[1, 2], [1, 2]]])
# new_arr.shape = (3, 2, 2)
Probably the cleanest way is to use np.repeat:
a = np.array([[1, 2], [1, 2]])
print(a.shape)
# (2, 2)
# indexing with np.newaxis inserts a new 3rd dimension, which we then repeat the
# array along, (you can achieve the same effect by indexing with None, see below)
b = np.repeat(a[:, :, np.newaxis], 3, axis=2)
print(b.shape)
# (2, 2, 3)
print(b[:, :, 0])
# [[1 2]
# [1 2]]
print(b[:, :, 1])
# [[1 2]
# [1 2]]
print(b[:, :, 2])
# [[1 2]
# [1 2]]
Having said that, you can often avoid repeating your arrays altogether by using broadcasting. For example, let's say I wanted to add a (3,) vector:
c = np.array([1, 2, 3])
to a. I could copy the contents of a 3 times in the third dimension, then copy the contents of c twice in both the first and second dimensions, so that both of my arrays were (2, 2, 3), then compute their sum. However, it's much simpler and quicker to do this:
d = a[..., None] + c[None, None, :]
Here, a[..., None] has shape (2, 2, 1) and c[None, None, :] has shape (1, 1, 3)*. When I compute the sum, the result gets 'broadcast' out along the dimensions of size 1, giving me a result of shape (2, 2, 3):
print(d.shape)
# (2, 2, 3)
print(d[..., 0]) # a + c[0]
# [[2 3]
# [2 3]]
print(d[..., 1]) # a + c[1]
# [[3 4]
# [3 4]]
print(d[..., 2]) # a + c[2]
# [[4 5]
# [4 5]]
Broadcasting is a very powerful technique because it avoids the additional overhead involved in creating repeated copies of your input arrays in memory.
* Although I included them for clarity, the None indices into c aren't actually necessary - you could also do a[..., None] + c, i.e. broadcast a (2, 2, 1) array against a (3,) array. This is because if one of the arrays has fewer dimensions than the other then only the trailing dimensions of the two arrays need to be compatible. To give a more complicated example:
a = np.ones((6, 1, 4, 3, 1)) # 6 x 1 x 4 x 3 x 1
b = np.ones((5, 1, 3, 2)) # 5 x 1 x 3 x 2
result = a + b # 6 x 5 x 4 x 3 x 2
Another way is to use numpy.dstack. Supposing that you want to repeat the matrix a num_repeats times:
import numpy as np
b = np.dstack([a]*num_repeats)
The trick is to wrap the matrix a in a list with a single element, then use the * operator to duplicate the element in this list num_repeats times.
For example, if:
a = np.array([[1, 2], [1, 2]])
num_repeats = 5
This repeats the array of [1 2; 1 2] 5 times in the third dimension. To verify (in IPython):
In [110]: import numpy as np
In [111]: num_repeats = 5
In [112]: a = np.array([[1, 2], [1, 2]])
In [113]: b = np.dstack([a]*num_repeats)
In [114]: b[:,:,0]
Out[114]:
array([[1, 2],
[1, 2]])
In [115]: b[:,:,1]
Out[115]:
array([[1, 2],
[1, 2]])
In [116]: b[:,:,2]
Out[116]:
array([[1, 2],
[1, 2]])
In [117]: b[:,:,3]
Out[117]:
array([[1, 2],
[1, 2]])
In [118]: b[:,:,4]
Out[118]:
array([[1, 2],
[1, 2]])
In [119]: b.shape
Out[119]: (2, 2, 5)
At the end we can see that the shape of the matrix is 2 x 2, with 5 slices in the third dimension.
Use a view and get free runtime! Extend generic n-dim arrays to n+1-dim
Introduced in NumPy 1.10.0, numpy.broadcast_to can be leveraged to simply generate a 3D view into the 2D input array. The benefit would be no extra memory overhead and virtually free runtime. This would be essential in cases where the arrays are big and we are okay to work with views. Also, this would work with generic n-dim cases.
I would use the word stack in place of copy, as readers might confuse it with the copying of arrays that creates memory copies.
Stack along first axis
If we want to stack input arr along the first axis, the solution with np.broadcast_to to create 3D view would be -
np.broadcast_to(arr,(3,)+arr.shape) # N = 3 here
Stack along third/last axis
To stack input arr along the third axis, the solution to create 3D view would be -
np.broadcast_to(arr[...,None],arr.shape+(3,))
If we actually need a memory copy, we can always append .copy() there. Hence, the solutions would be -
np.broadcast_to(arr,(3,)+arr.shape).copy()
np.broadcast_to(arr[...,None],arr.shape+(3,)).copy()
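One caveat: the view returned by np.broadcast_to is read-only, so assigning into it raises an error; take the .copy() first if you need to write. A quick illustration:
view = np.broadcast_to(arr, (3,)+arr.shape)
view.flags.writeable  # False
# view[0] = 0         # would raise ValueError: assignment destination is read-only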
Here's how the stacking works for the two cases, shown with their shape information for a sample case -
# Create a sample input array of shape (4,5)
In [55]: arr = np.random.rand(4,5)
# Stack along first axis
In [56]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[56]: (3, 4, 5)
# Stack along third axis
In [57]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[57]: (4, 5, 3)
The same solution(s) would work to extend an n-dim input to an (n+1)-dim view output along the first and last axes. Let's explore some higher-dim cases -
3D input case :
In [58]: arr = np.random.rand(4,5,6)
# Stack along first axis
In [59]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[59]: (3, 4, 5, 6)
# Stack along last axis
In [60]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[60]: (4, 5, 6, 3)
4D input case :
In [61]: arr = np.random.rand(4,5,6,7)
# Stack along first axis
In [62]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[62]: (3, 4, 5, 6, 7)
# Stack along last axis
In [63]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[63]: (4, 5, 6, 7, 3)
and so on.
Timings
Let's use a large sample 2D case and get the timings and verify output being a view.
# Sample input array
In [19]: arr = np.random.rand(1000,1000)
Let's prove that the proposed solution is a view indeed. We will use stacking along first axis (results would be very similar for stacking along the third axis) -
In [22]: np.shares_memory(arr, np.broadcast_to(arr,(3,)+arr.shape))
Out[22]: True
Let's get the timings to show that it's virtually free -
In [20]: %timeit np.broadcast_to(arr,(3,)+arr.shape)
100000 loops, best of 3: 3.56 µs per loop
In [21]: %timeit np.broadcast_to(arr,(3000,)+arr.shape)
100000 loops, best of 3: 3.51 µs per loop
Being a view, increasing N from 3 to 3000 changed nothing in the timings; both are negligible. Hence, efficient both on memory and performance!
This can now also be achieved using np.tile as follows:
import numpy as np
a = np.array([[1,2],[1,2]])
b = np.tile(a, (3, 1, 1))
b.shape
(3, 2, 2)
b
array([[[1, 2],
        [1, 2]],

       [[1, 2],
        [1, 2]],

       [[1, 2],
        [1, 2]]])
A = np.array([[1,2],[3,4]])
B = np.asarray([A]*N)
Edit (per @Mr.F), to preserve dimension order:
B = B.T
Here's a broadcasting example that does exactly what was requested.
a = np.array([[1, 2], [1, 2]])
a=a[:,:,None]
b=np.array([1]*5)[None,None,:]
Then b*a is the desired result and (b*a)[:,:,0] produces array([[1, 2],[1, 2]]), which is the original a, as does (b*a)[:,:,1], etc.
Summarizing the solutions above:
a = np.arange(9).reshape(3,-1)
b = np.repeat(a[:, :, np.newaxis], 5, axis=2)
c = np.dstack([a]*5)
d = np.tile(a, [5,1,1])
e = np.array([a]*5)
f = np.repeat(a[np.newaxis, :, :], 5, axis=0) # np.repeat again
print('b='+ str(b.shape), b[:,:,-1].tolist())
print('c='+ str(c.shape),c[:,:,-1].tolist())
print('d='+ str(d.shape),d[-1,:,:].tolist())
print('e='+ str(e.shape),e[-1,:,:].tolist())
print('f='+ str(f.shape),f[-1,:,:].tolist())
b=(3, 3, 5) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
c=(3, 3, 5) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
d=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
e=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
f=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
Good luck

Pytorch batch matrix vector outer product

I am trying to generate a vector-matrix outer product (tensor) using PyTorch. Assuming the vector v has size p and the matrix M has size q x r, the result of the product should be p x q x r.
Example:
# size: 2
v = [0, 1]
# size: 2 x 3
M = [[0, 1, 2],
     [3, 4, 5]]
# size: 2 x 2 x 3
v*M = [[[0, 0, 0],
        [0, 0, 0]],
       [[0, 1, 2],
        [3, 4, 5]]]
For two vectors v1 and v2, I can use torch.bmm(v1.view(1, -1, 1), v2.view(1, 1, -1)). This can be easily extended for a batch of vectors. However, I am not able to find a solution for the vector-matrix case. Also, I need to do this operation for batches of vectors and matrices.
You can use the torch.einsum operator:
torch.einsum('bp,bqr->bpqr', v, M) # batch-wise operation v.shape=(b,p) M.shape=(b,q,r)
torch.einsum('p,qr->pqr', v, M) # cross-batch operation
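For example, the cross-batch form reproduces the expected output from the question:
import torch

v = torch.tensor([0, 1])
M = torch.tensor([[0, 1, 2],
                  [3, 4, 5]])
torch.einsum('p,qr->pqr', v, M)
# tensor([[[0, 0, 0],
#          [0, 0, 0]],
#
#         [[0, 1, 2],
#          [3, 4, 5]]])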
I was able to do it with the following code.
Single vector and matrix
v = torch.arange(3)
M = torch.arange(8).view(2, 4)
# v: tensor([0, 1, 2])
# M: tensor([[0, 1, 2, 3],
# [4, 5, 6, 7]])
torch.mm(v.unsqueeze(1), M.view(1, 2*4)).view(3,2,4)
tensor([[[ 0,  0,  0,  0],
         [ 0,  0,  0,  0]],

        [[ 0,  1,  2,  3],
         [ 4,  5,  6,  7]],

        [[ 0,  2,  4,  6],
         [ 8, 10, 12, 14]]])
For a batch of vectors and matrices, it can be easily extended using torch.bmm.
v = torch.arange(batch_size*2).view(batch_size, 2)
M = torch.arange(batch_size*3*4).view(batch_size, 3, 4)
torch.bmm(v.unsqueeze(2), M.view(-1, 1, 3*4)).view(-1, 2, 3, 4)
If [batch_size, z, x, y] is the shape of the target tensor, another solution is to build two tensors of this shape with the appropriate elements in each position and then apply an elementwise multiplication. It works fine with batches of vectors:
# input matrices
batch_size = 2
x1 = torch.Tensor([0,1])
x2 = torch.Tensor([[0,1,2],
                   [3,4,5]])
x1 = x1.unsqueeze(0).repeat((batch_size, 1))
x2 = x2.unsqueeze(0).repeat((batch_size, 1, 1))
# dimensions
b = x1.shape[0]
z = x1.shape[1]
x = x2.shape[1]
y = x2.shape[2]
# solution
mat1 = x1.reshape(b, z, 1, 1).repeat(1, 1, x, y)
mat2 = x2.reshape(b,1,x,y).repeat(1, z, 1, 1)
mat1*mat2
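The same result can be had without materializing any of the repeated tensors, by inserting singleton axes and letting broadcasting expand them lazily. A sketch reusing the x1 and x2 defined above:
# singleton axes instead of .repeat(); broadcasting aligns the shapes
result = x1[:, :, None, None] * x2[:, None, :, :]  # (b, z, x, y)
torch.equal(result, mat1*mat2)                     # True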

multidimensional array multiplication in numpy

I have 2 multidimensional arrays that I want to multiply.
Both of my arrays have shape:
shape : (3, 100)
I want to convert the Matlab code:
sum(q1.*q2)
to numpy.
np.dot(q1, q2)
gives me the error:
ValueError: objects are not aligned
Use the elementwise product * instead of the dot product.
Here is a sample run with a reduced dimension
Implementation
A = np.random.randint(5,size=(3,4))
B = np.random.randint(5,size=(3,4))
result = A * B
Demo
>>> A
array([[4, 1, 3, 0],
       [2, 0, 2, 2],
       [0, 1, 1, 1]])
>>> B
array([[1, 3, 0, 2],
       [3, 4, 1, 2],
       [3, 0, 4, 3]])
>>> A * B
array([[4, 3, 0, 0],
       [6, 0, 2, 4],
       [0, 0, 4, 3]])
My installation of Octave, when asked to do
sum(a .* b)
with a and b having shape (3, 100), returns an array of shape (1, 100). The exact equivalent in numpy would be:
np.sum(a * b, axis=0)
which returns an array of shape (100,), or if you want to keep the dimensions of size 1:
np.sum(a * b, axis=0, keepdims=True)
You can get the same result, possibly faster, using np.einsum:
np.einsum('ij,ij->j', a, b)
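A quick check that the three forms agree, with shapes as in the question:
a = np.random.rand(3, 100)
b = np.random.rand(3, 100)
np.allclose(np.sum(a * b, axis=0), np.einsum('ij,ij->j', a, b))  # True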
