Matrix multiplication over the 3rd and 4th axes in PyTorch (Python)

I have two tensors: a of shape (16, 8, 8, 64) and b of shape (64, 64). Suppose I extract the last dimension of a into a column vector c; I want to compute matmul(matmul(c.T, b), c), and I want this done for each position along the first 3 dimensions of a. That is, the final product should have shape (16, 8, 8, 1). How can I achieve this in PyTorch?

This can be done as follows:
import torch

row_vec = a[:, :, :, None, :].float()   # (16, 8, 8, 1, 64)
col_vec = a[:, :, :, :, None].float()   # (16, 8, 8, 64, 1)
b = b[None, None, None, :, :].float()   # (1, 1, 1, 64, 64)
prod = torch.matmul(torch.matmul(row_vec, b), col_vec)  # (16, 8, 8, 1, 1)
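Note that prod keeps two trailing singleton axes, i.e. shape (16, 8, 8, 1, 1); squeezing the last one gives the requested (16, 8, 8, 1). As a self-contained sanity check (a sketch with random data; torch.einsum expresses the same bilinear form c.T @ b @ c directly):
import torch

a = torch.randn(16, 8, 8, 64)
b = torch.randn(64, 64)

# batched c.T @ b @ c over the first three axes
prod = (a[..., None, :] @ b @ a[..., None]).squeeze(-1)        # (16, 8, 8, 1)
check = torch.einsum('bijm,mn,bijn->bij', a, b, a)[..., None]  # same contraction
print(torch.allclose(prod, check, atol=1e-4))  # True, up to float tolerance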

Related

Concatenate 1D array to a 3D array

I have a three-dimensional array A with shape (5774, 15, 100) and a 1D array B with shape (5774,). I want to combine them to get another array C with shape (5774, 15, 101). I am using hstack as
C = hstack((A, np.array(B)[:, None]))
but I get the error below; any suggestions?
ValueError: could not broadcast input array from shape (5774,15,100) into shape (5774)
You'd need to use np.concatenate (which can concatenate arrays of different shapes, unlike the various np.*stack methods). Then you need np.broadcast_to to expand that (5774,)-shaped array to (5774, 15, 1), because concatenate still requires all inputs to have the same number of dimensions.
C = np.concatenate((A,
                    np.broadcast_to(np.array(B)[:, None, None], A.shape[:-1] + (1,))),
                   axis=-1)
Checking:
import numpy as np

A = np.random.rand(5774, 15, 100)
B = np.random.rand(5774)
C = np.concatenate((A,
                    np.broadcast_to(np.array(B)[:, None, None], A.shape[:-1] + (1,))),
                   axis=-1)
C.shape
Out: (5774, 15, 101)
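For clarity, the intermediate shapes look like this (a minimal sketch with the same arrays; broadcast_to returns a read-only view, which concatenate then copies into the result):
B_col = B[:, None, None]                        # (5774, 1, 1)
B_full = np.broadcast_to(B_col, (5774, 15, 1))  # (5774, 15, 1), no copy made
C = np.concatenate((A, B_full), axis=-1)        # (5774, 15, 101)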

Numpy - Multiple Outer Products

I was wondering if there's a way to compute multiple outer products and stack the results in a single operation.
Say I have an Nx1 vector and take the outer product with a 1xM vector; the result will be an NxM matrix.
What if I had an NxR matrix A, and an RxM matrix B. Is it possible to construct an NxMxR matrix where each layer of the output matrix is the outer product of the corresponding column of A and row of B?
I know it's really easy to do this in a single for loop over R, but I wanted to know if there's a faster way using numpy builtins (as there usually is when numpy is concerned).
I haven't been able to figure out a set of indices that work with einsum (and I'm not even sure if einsum is the right approach, since there is no summation involved here)
Yes, of course, using broadcasting or einsum (the fact that there is no summation does not matter):
import numpy

N, M, R = 8, 9, 16

# single outer product of an N-vector and an M-vector
A = numpy.random.rand(N)
B = numpy.random.rand(M)
C = A[:, None] * B[None, :]
D = numpy.einsum('a,b->ab', A, B)
numpy.allclose(C, D)
# True
C.shape
# (8, 9)
# R outer products at once: column r of A with column r of B
A = numpy.random.rand(N, R)
B = numpy.random.rand(M, R)
C = A[:, None, :] * B[None, :, :]
D = numpy.einsum('ar,br->abr', A, B)
numpy.allclose(C, D)
# True
C.shape
# (8, 9, 16)
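As a sanity check against the straightforward loop over R mentioned in the question (a sketch reusing the arrays above):
E = numpy.empty((N, M, R))
for r in range(R):
    E[:, :, r] = numpy.outer(A[:, r], B[:, r])
numpy.allclose(C, E)
# True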

TensorFlow multiplication across a dimension

Let tensor T have shape [B, N, N, 6]. I want to multiply the slices T[b, :, :, 0:4] by T[b, :, :, 5] element-wise for each b in range(B); the channel [:, :, 4] should not be changed. What is the best way to do this in TensorFlow?
My attempt:
result = tf.empty([B, N, N, 5])
for b in range(B):
    for i in range(4):
        result[b, :, :, i] = tf.mul(T[b, :, :, i], T[b, :, :, 5])
    result[b, :, :, 4] = T[b, :, :, 4]
In TensorFlow, it's not generally possible to build a tensor value by assigning to slices. The programming model tends to be more functional than imperative. One way of implementing your calculation is as follows:
result = tf.concat(3, [tf.mul(T[:, :, :, 0:4], T[:, :, :, 5:6]), T[:, :, :, 4:5]])
Note that you don't need multiple multiplications: (i) the original computation is already element-wise over the 0th (batch) dimension, so the for b in range(B) loop is unnecessary, and (ii) TensorFlow broadcasts the second argument of the multiplication along the 3rd dimension.
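For reference, the snippet above uses the pre-1.0 API; in current TensorFlow (2.x), tf.mul has been renamed tf.multiply and tf.concat takes the tensor list first with the axis as a keyword, so the same computation would look roughly like this (a sketch, assuming T is a tensor of shape [B, N, N, 6]):
import tensorflow as tf

scaled = tf.multiply(T[:, :, :, 0:4], T[:, :, :, 5:6])  # broadcasts along the last axis
result = tf.concat([scaled, T[:, :, :, 4:5]], axis=3)   # shape [B, N, N, 5]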

Matrix triple product with theano

This is pretty much the same question as Matrix/Tensor Triple Product?, but for Theano.
So I have three matrices A, B, C of sizes n*r, m*r, l*r, and I want to compute the 3D tensor of shape (n,m,l) resulting from the triple (trilinear) product:
X[i,j,k] = sum_a A[i,a] * B[j,a] * C[k,a]
A, B and C are shared variables:
A = theano.shared(numpy.random.randn(n, r))
B = theano.shared(numpy.random.randn(m, r))
C = theano.shared(numpy.random.randn(l, r))
I'd like to write it with a single theano expression, is there a way to do so?
If there are many, which one is the fastest?
np.einsum('nr,mr,lr->nml', A, B, C)
is equivalent to
np.dot(A[:, None, :] * B[None, :, :], C.T)
which can be implemented in Theano as
theano.dot(A[:, None, :] * B[None, :, :], C.T)
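A quick NumPy check of the equivalence (a sketch; the sizes n, m, l, r are small example values chosen here for illustration):
import numpy as np

n, m, l, r = 4, 5, 6, 3
A = np.random.randn(n, r)
B = np.random.randn(m, r)
C = np.random.randn(l, r)

X_einsum = np.einsum('nr,mr,lr->nml', A, B, C)
X_dot = np.dot(A[:, None, :] * B[None, :, :], C.T)
print(np.allclose(X_einsum, X_dot))  # True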

Fast way to add calculated value element to multidimensional numpy array

I've got a numpy array image: a two-dimensional image where each element has two components (so the array has shape (height, width, 2)). I want to convert it to an array where each element has three components: the first two unchanged, and a third computed from them, like so:
for x in range(0, width):
    for y in range(0, height):
        horizontal, vertical = image[y, x]
        annotated_image[y, x] = (horizontal, vertical,
                                 int(abs(horizontal) > 1.0 or abs(vertical) > 1.0))
This loop works as expected, but it is very slow compared to other numpy operations; for a medium-sized image it takes an unacceptable 30 seconds.
Is there a different way to do the same calculation but faster? The original image array does not have to be preserved.
You could just separate the components of the image and work with multiple arrays instead:
image_component1 = image[:, :, 0]
image_component2 = image[:, :, 1]
result = (np.abs(image_component1) > 1.) | (np.abs(image_component2) > 1.)
If for some reason you need the layout you specified, you can construct another three-dimensional array instead:
result = np.empty([image.shape[0], image.shape[1], 3], dtype=image.dtype)
result[:, :, 0] = image[:, :, 0]
result[:, :, 1] = image[:, :, 1]
result[:, :, 2] = (np.abs(image[:, :, 0]) > 1.) | (np.abs(image[:, :, 1]) > 1.)
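A slightly more compact variant (a sketch, assuming image has shape (height, width, 2)): compute the flag channel once and stack it onto the original with np.dstack, which promotes the 2D flag array to (height, width, 1) before joining along the last axis:
import numpy as np

flag = (np.abs(image[:, :, 0]) > 1.) | (np.abs(image[:, :, 1]) > 1.)
annotated_image = np.dstack((image, flag.astype(image.dtype)))  # (height, width, 3)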
