I have a 2 x 2 matrix (a numpy.array), and an N x 2 array X containing N 2-dimensional vectors.
I want to multiply each vector in X by the 2 x 2 matrix. Below I use a for loop, but I am sure there is a faster way. Please, could someone show me what it is? I assume there is a way using a numpy function.
# the matrix I want to multiply X by
matrix = np.array([[0, 1], [-1, 0]])
# initialize empty solution
Y = np.empty((N, 2))
# loop over each vector in X and create a new vector Y with the result
for i in range(0, N):
    Y[i] = np.dot(matrix, X[i])
For example, these arrays:
matrix = np.array([
    [0, 1],
    [0, -1]
])
X = np.array([
    [0, 0],
    [1, 1],
    [2, 2]
])
Should result in:
Y = np.array([
    [0, 0],
    [1, -1],
    [2, -2]
])
The one-liner is (matrix @ X.T).T
Just transpose your X to get your vectors as columns. Then matrix @ X.T (or np.dot(matrix, X.T) if you prefer, but now that the @ operator exists, why not use it?) is a matrix whose columns are matrix times X[i]. Just transpose the result back if you need Y to be made of rows of results.
matrix = np.array([[0, 1], [-1, 0]])
X = np.array([[1,2],[3,4],[5,6]])
Y = (matrix @ X.T).T
Y is
array([[ 2, -1],
       [ 4, -3],
       [ 6, -5]])
As expected, I guess.
In detail:
X is
array([[1, 2],
       [3, 4],
       [5, 6]])
so X.T is
array([[1, 3, 5],
       [2, 4, 6]])
So you can multiply your 2x2 matrix by this 2x3 matrix, and the result will be a 2x3 matrix whose columns are matrix multiplied by the corresponding columns of X.T. matrix @ X.T is
array([[ 2,  4,  6],
       [-1, -3, -5]])
And transposing back this gives the already given result.
So, tl;dr: the one-liner answer is (matrix @ X.T).T
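Note that the same result can also be written without the double transpose, since (A @ B).T equals B.T @ A.T; this is just an algebraically equivalent spelling, using the matrix and X defined above:
Y = X @ matrix.T
# array([[ 2, -1],
#        [ 4, -3],
#        [ 6, -5]])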
You are doing a matrix multiplication of a (2, 2) matrix with each (2,) row of X.
To compute this directly, you need to give the operands compatible numbers of dimensions. Add a dimension with None and compute Y directly like this:
matrix = np.array([[3, 1], [-1, 0.1]])
N = 10
Y = np.empty((N, 2))
X = np.ones((N, 2))
X[0][0] = 2
X[5][1] = 3
# loop over each vector in X and create a new vector Y with the result
for i in range(0, N):
    Y[i] = np.dot(matrix, X[i])
Ydirect = matrix[None, :] @ X[:, :, None]
print(Y)
print(Ydirect[:,:,0])
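To confirm that the broadcasted version matches the loop, a quick check using the variables defined above:
print(np.allclose(Y, Ydirect[:, :, 0]))
# True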
You can vectorize Adrien's result and remove the for loop, which will optimize performance, especially as the matrices get bigger.
matrix = np.array([[3, 1], [-1, 0.1]])
N = 10
X = np.ones((N, 2))
X[0][0] = 2
X[5][1] = 3
# calculate the dot product for all rows at once using the @ operator
Y = (matrix @ X.T).T  # transpose back so Y is (N, 2), matching the loop's output
print(Y)
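For larger N you can get a rough sense of the speedup with timeit (a sketch only; the value of N is illustrative and the exact timings will vary by machine):
import timeit

N = 10_000
X = np.ones((N, 2))
loop = lambda: np.stack([matrix @ X[i] for i in range(N)])
vectorized = lambda: (matrix @ X.T).T
print(timeit.timeit(loop, number=10))        # much slower
print(timeit.timeit(vectorized, number=10))  # much faster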
Related
I have an $I$-indexed array $V = (V_i)_{i \in I}$ of (column) vectors $V_i$, which I want to multiply pointwise (along $i \in I$) by a matrix $M$. So I'm looking for a "vectorized" operation, wherein the individual operation is a multiplication of a matrix with a vector; that is
$W = (M V_i)_{i \in I}$
Is there a numpy way to do this?
numpy.dot unfortunately assumes that $V$ is a matrix, instead of an $I$-indexed family of vectors, which obviously fails.
So basically I want to "vectorize" the operation
W = [np.dot(M, V[i]) for i in range(N)]
Considering the 2D array V as a list (first index) of column vectors (second index).
If
shape(M) == (2, 2)
shape(V) == (N, 2)
Then
shape(W) == (N, 2)
EDIT:
Based on your iterative example, it seems it can be done with a dot product plus some transposes to match the shapes. This is (M @ V.T).T, i.e. the transpose of M @ V.T.
# Step by step
((2,2) @ (5,2).T).T
-> ((2,2) @ (2,5)).T
-> (2,5).T
-> (5,2)
Code to prove this is as follows. Your iterative output results in a matrix W which is exactly equal to the solution matrix.
M = np.random.random((2,2))
V = np.random.random((5,2))
# YOUR ITERATIVE SOLUTION (STACKED AS MATRIX)
W = np.stack([np.dot(M, V[i]) for i in range(5)])
print(W)
#array([[0.71663319, 0.84053871],
#       [0.28626354, 0.36282745],
#       [0.26865497, 0.55552295],
#       [0.40165606, 0.10177711],
#       [0.33950909, 0.54215385]])

# PROPOSED DOT PRODUCT
solution = (M@V.T).T #<---------------
print(solution)
#array([[0.71663319, 0.84053871],
#       [0.28626354, 0.36282745],
#       [0.26865497, 0.55552295],
#       [0.40165606, 0.10177711],
#       [0.33950909, 0.54215385]])
np.allclose(W, solution) #compare the 2 matrices
True
IIUC, you are looking for a pointwise multiplication of a matrix M and vector V (with broadcasting).
The matrix here is (3,3), while V is an array with 4 column vectors, each of which you want to independently multiply with the matrix while obeying broadcasting rules.
# Broadcasting Rules
M -> 3, 3
V -> 4, 1, 3 #V.T[:,None,:]
----------------
R -> 4, 3, 3
----------------
Code for this -
M = np.array([[1, 1, 1],
              [0, 0, 0],
              [1, 1, 1]])   # (3, 3) matrix M

V = np.array([[1, 2, 3, 4],
              [1, 2, 3, 4],
              [1, 2, 3, 4]])  # (3, 4) array storing 4 column vectors of length 3
R = M * V.T[:,None,:] #<--------------
R
array([[[1, 1, 1],
        [0, 0, 0],
        [1, 1, 1]],

       [[2, 2, 2],
        [0, 0, 0],
        [2, 2, 2]],

       [[3, 3, 3],
        [0, 0, 0],
        [3, 3, 3]],

       [[4, 4, 4],
        [0, 0, 0],
        [4, 4, 4]]])
After this, if you need any aggregation, you can reduce the resulting matrix with the required operations.
For example, matrix M * column vector [1,1,1] gives the first block -
array([[1, 1, 1],
       [0, 0, 0],
       [1, 1, 1]])
while matrix M * column vector [4,4,4] gives the last block -
array([[4, 4, 4],
       [0, 0, 0],
       [4, 4, 4]])
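The same result can also be spelled with einsum, which some find clearer for keeping track of the batch axis; this is an equivalent formulation of the broadcasted product above, not a different computation:
R_einsum = np.einsum('ij,jn->nij', M, V)   # R_einsum[n, i, j] = M[i, j] * V[j, n]
print(np.array_equal(R, R_einsum))
# True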
With
shape(M) == (2, 2)
shape(V) == (N, 2)
and
W = [np.dot(M, V[i]) for i in range(N)]
V[i] is (2,), so np.dot(M, V[i]) is (2,2) with (2,) => (2,), with sum-of-products over the last dimension of M. np.array(W) then has shape (N, 2).
For 2d A,B, np.dot(A,B) does sum-of-products with the last dimension of A and 2nd to the last of B. You want the last dim of M with the last of V.
One way is:
np.dot(M,V.T).T  # (2,2) with (2,N) => (2,N) => (N,2)
(M@V.T).T        # with the matmul operator
Sometimes einsum makes the relation between axes clearer:
np.einsum('ij,nj->ni',M,V)
np.einsum('ij,jn->in',M,V.T).T # with j in last/2nd last positions
Or switching the order of V and M:
V @ M.T          # 'nj,ji->ni'
Or treating the N dimension as a batch, we could make V[:,:,None] (N,2,1). This could be thought of as N (2,1) "column vectors".
M @ V[:,:,None]  # (N,2,1)
np.einsum('ij,njk->nik', M, V[:,:,None]) # again j is in the last/2nd last slots
Numerically:
In [27]: M = np.array([[1,2],[3,4]]); V = np.array([[1,2],[2,3],[3,4]])
In [28]: [M@V[i] for i in range(3)]
Out[28]: [array([ 5, 11]), array([ 8, 18]), array([11, 25])]
In [30]: (M@V.T).T
Out[30]:
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])
In [31]: V@M.T
Out[31]:
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])
Or the batched:
In [32]: M@V[:,:,None]
Out[32]:
array([[[ 5],
        [11]],

       [[ 8],
        [18]],

       [[11],
        [25]]])
In [33]: np.squeeze(M@V[:,:,None])
Out[33]:
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])
I'm trying to do an operation like the following on PyTorch tensors:
a = torch.tensor([[1, 0],
                  [0, 1],
                  [2, 0],
                  [3, 2]])
b = torch.tensor([[0, 1],
                  [2, 0]])
I want to remove the rows [0,1], [2,0] which are the rows of b from a.
Is there any way to do this?
# result
a = torch.tensor([[1, 0],
                  [3, 2]])
You could do it if the tensor shapes were broadcastable.
For a tensor a of shape (?, d) and a tensor b of shape (d,), you could write something like:
cmp = a.eq(b).all(dim=1).logical_not(), i.e. compare each d-dimensional row of a with b and give me the indices where the comparison is False.
From these you can then easily build your new tensor like so:
a = a[cmp]
I doubt you'll find an elegant way of doing this when b itself contains a batch dimension; your best bet would be to write a for loop.
Full example:
>>> xs = torch.tensor([[1,0], [0,1], [2,0], [3,2]])
>>> ys = torch.tensor([[0,1],[2,0]])
>>> for y in ys:
...     xs = xs[xs.eq(y).all(dim=1).logical_not()]
>>> xs
tensor([[1, 0],
        [3, 2]])
You can do something like this exploiting broadcasting:
import torch
a = torch.tensor([[1, 0], [0, 1], [2, 0], [3, 2]])
b = torch.tensor([[0, 1], [2, 0]])
indices = ((a == b[:, None]).sum(axis = 2) != a.shape[1]).all(axis = 0)
print(indices)
print(a[indices])
indices =
tensor([ True, False, False, True])
a[indices] =
tensor([[1, 0],
        [3, 2]])
This works for any tensors a and b of shapes m x n and p x n respectively, i.e. the number of columns (a.shape[1]) must be the same, while the number of rows can differ.
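To see why the one-liner works, it can help to look at the intermediate shapes; a small sketch using the tensors above (the variable names here are mine):
eq = a == b[:, None]                      # (2, 4, 2): every row of b vs every row of a
row_match = eq.sum(dim=2) == a.shape[1]   # (2, 4): True where a whole row of a equals a row of b
keep = ~row_match.any(dim=0)              # (4,): rows of a that match no row of b
print(a[keep])
# tensor([[1, 0],
#         [3, 2]])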
So here I have this problem.
Given 2D numpy arrays 'a' and 'b' of sizes m×n and k×k
respectively (k <= n, k <= m), 2 integers 'stride' and 'padding' and
a function 'f'. You need to
first pad 'a' matrix with 0s on each side,
then slide 'b' over 'a' with step 'stride' and multiply the overlapping elements of 'a' by the corresponding elements of 'b',
add the resulting k * k numbers
apply the 'f' function to the result
and place them in the new matrix.
a = np.array([[1, 1, 2],
              [0, 1, 3],
              [1, 3, 0],
              [4, 5, 2]])
b = np.array([[1, 0],
              [0, 1]])
stride = 1
padding = 0
f = lambda x: x**2
print(conv(a, b, stride, padding, f))
>> [[ 4, 16],
    [ 9,  1],
    [36, 25]]
I don't understand how I should handle the case where the stride is large. For example, if I set stride=2 in the example above, what will the program do? Will it first take [[1,1], [0,1]] and then skip to [[0,1], [1,3]], or does it behave differently?
And what functions or method will be useful in this example, I already know how to pad matrices with 0s, but is there something else that could be useful?
def padding(a, padd):
    matrix = np.zeros((len(a) + 2 * padd, len(a[0]) + 2 * padd))
    for i in range(len(a)):
        for j in range(len(a[0])):
            matrix[i + padd, j + padd] = a[i, j]
    return matrix
def conv(a, b, stride, padd, f):
    c = padding(a, padd)                    # zero-pad the input first
    k = len(b)
    out_h = (len(c) - k) // stride + 1      # number of vertical window positions
    out_w = (len(c[0]) - k) // stride + 1   # number of horizontal window positions
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = c[i * stride:i * stride + k, j * stride:j * stride + k]
            output[i, j] = np.sum(window * b)
    return f(output)
a = np.array([[1, 1, 2],
              [0, 1, 3],
              [1, 3, 0],
              [4, 5, 2]])
b = np.array([[1, 0],
              [0, 1]])
stride = 1
pad = 0
f = lambda x: x**2
print(conv(a, b, stride, pad, f))
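As for the stride question: with the usual convolution convention, which the conv above follows, the window's top-left corner advances in steps of stride, so with stride=2 the first window is [[1,1],[0,1]] and the next one down is [[1,3],[4,5]]; the window [[0,1],[1,3]] is skipped. A quick check with the arrays defined above:
print(conv(a, b, 2, 0, f))
# [[ 4.]
#  [36.]]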
I'm trying to create a 2x2 array from a size 2 vector x by doing matrix multiplication like x * x^T:
>>> x = np.array([2, 2])
>>> x
array([2, 2])
>>> np.matmul(x,x.T)
8
As you can see, this fails. I came up with this solution:
>>> m = np.matrix(x)
>>> m
matrix([[2, 2]])
>>> m.T
matrix([[2],
        [2]])
>>> np.matmul(m.T, m)
matrix([[4, 4],
        [4, 4]])
This achieves what I want. But is there a better way to do this, preferably without resorting to np.matrix?
EDIT: Creating a 2x1 vector is not an option because of the context outside the question.
Use np.outer:
np.outer(x, x)
# array([[4, 4],
#        [4, 4]])
Alternatively, increase x's dimension by 1 before calling np.matmul:
x = x[:, None] # x = x.reshape(-1, 1)
x.shape
# (2, 1)
x @ x.T  # (2,1) . (1,2) => (2,2)
# array([[4, 4],
#        [4, 4]])
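Another equivalent spelling of the outer product uses einsum; shown here only as an alternative to np.outer, using the original 1-D x from the question:
x = np.array([2, 2])
np.einsum('i,j->ij', x, x)
# array([[4, 4],
#        [4, 4]])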
If you reshape x, you can use the @ operator to do the multiplication:
x = np.array([2, 2])
Xprime = x.reshape(len(x), 1)
print(Xprime @ Xprime.T)
#[[4 4]
# [4 4]]
np.array([2, 2]) doesn't create a 2x1 vector, it creates a 2 vector. If you want a 2x1 matrix, you need np.array([[2], [2]]). Or you can create a 1x2 matrix with np.array([[2, 2]]) and then do np.matmul(x.T,x)
You do not have a 2x1 vector here, but a 1D vector. You can see that with:
> x.shape
(2,)
To actually create a 2D (1 x 2) array, add brackets:
> x = np.array([[2, 2]])
> x.shape
(1,2)
And now you have what you want with:
> np.matmul(x.T,x)
array([[4, 4],
       [4, 4]])
or x.T @ x in Python 3.
I would like to scale an array of shape (h, w) by an integer factor n, resulting in an array of shape (h*n, w*n), with each value of the original array copied into an n x n block of the result.
Say that I have a 2x2 array:
array([[1, 1],
       [0, 1]])
I would like to scale the array to become 4x4:
array([[1, 1, 1, 1],
       [1, 1, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]])
That is, the value of each cell in the original array is copied into 4 corresponding cells in the resulting array. Assuming arbitrary array size and scaling factor, what's the most efficient way to do this?
You should use the Kronecker product, numpy.kron:
Computes the Kronecker product, a composite array made of blocks of the second array scaled by the first
import numpy as np
a = np.array([[1, 1],
              [0, 1]])
n = 2
np.kron(a, np.ones((n,n)))
which gives what you want:
array([[1, 1, 1, 1],
       [1, 1, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]])
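One small caveat: np.ones defaults to float64, so the kron result is promoted to float. If you want to keep the original integer dtype, you can pass it explicitly:
np.kron(a, np.ones((n, n), dtype=a.dtype))
# array([[1, 1, 1, 1],
#        [1, 1, 1, 1],
#        [0, 0, 1, 1],
#        [0, 0, 1, 1]])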
You could use repeat:
In [6]: a.repeat(2,axis=0).repeat(2,axis=1)
Out[6]:
array([[1, 1, 1, 1],
       [1, 1, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]])
I am not sure if there's a neat way to combine the two operations into one.
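If you do want a single expression, one option is to insert length-1 axes and broadcast; this is a sketch equivalent to the double repeat for an integer scale factor n, not a claim that repeat has a combined form:
import numpy as np

a = np.array([[1, 1],
              [0, 1]])
n = 2
h, w = a.shape
# expand each element into an n x n block, then merge the block axes
out = np.broadcast_to(a[:, None, :, None], (h, n, w, n)).reshape(h * n, w * n)
print(out)
# [[1 1 1 1]
#  [1 1 1 1]
#  [0 0 1 1]
#  [0 0 1 1]]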
scipy.misc.imresize can scale images (note that it has since been deprecated and removed from newer SciPy releases). It can be used to scale numpy arrays, too:
#!/usr/bin/env python
import numpy as np
import scipy.misc
def scale_array(x, new_size):
    min_el = np.min(x)
    max_el = np.max(x)
    y = scipy.misc.imresize(x, new_size, mode='L', interp='nearest')
    y = y / 255 * (max_el - min_el) + min_el
    return y
x = np.array([[1, 1],
              [0, 1]])
n = 2
new_size = n * np.array(x.shape)
y = scale_array(x, new_size)
print(y)
To scale efficiently, I use the following approach. It works about 5 times faster than repeat and 10 times faster than kron. First, initialise the target array, so the scaled array can be filled in place, and predefine the slices to save a few cycles:
K = 2                                             # scale factor
a_x = numpy.zeros((h * K, w * K), dtype=a.dtype)  # upscaled array
Y = a_x.shape[0]
X = a_x.shape[1]
myslices = []
for y in range(0, K):
    for x in range(0, K):
        s = slice(y, Y, K), slice(x, X, K)
        myslices.append(s)
Now this function will do the scale:
def scale(A, B, slices):  # fill A with B through slices
    for s in slices:
        A[s] = B
Or the same thing simply in one function:
def scale(A, B, k):  # fill A with B scaled by k
    Y = A.shape[0]
    X = A.shape[1]
    for y in range(0, k):
        for x in range(0, k):
            A[y:Y:k, x:X:k] = B
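A minimal usage sketch of the second variant, assuming the 2x2 array a from the question and the same scale factor K:
import numpy as np

a = np.array([[1, 1],
              [0, 1]])
K = 2
a_x = np.zeros((a.shape[0] * K, a.shape[1] * K), dtype=a.dtype)
scale(a_x, a, K)
print(a_x)
# [[1 1 1 1]
#  [1 1 1 1]
#  [0 0 1 1]
#  [0 0 1 1]]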