I have two tensors containing batches of matrices of the same batch size (first dimension) but different matrix structure (all other dimensions).
For example A of shape (n,d,d) and B (n,e,e).
Now I would like to build block diagonals of A and B for all n.
So that the output shape (n,(d+e),(d+e)).
Is there an implementation for a problem like this?
I could only find torch.block_diag which is not suited for dimensions higher than 2.
Unfortunately there's no vectorized implementation, you'd have to loop through the batch:
import torch

A = torch.rand((2, 2, 2))
B = torch.rand((2, 3, 3))
C = torch.zeros((2, 5, 5))
for i in range(2):
    C[i] = torch.block_diag(A[i], B[i])
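That said, because the block positions are known ahead of time, a loop-free sketch (assuming all matrices in A share one shape and all matrices in B share another) is to write A and B into the corner blocks of a zero tensor:

import torch

A = torch.rand((2, 2, 2))
B = torch.rand((2, 3, 3))

n, d, e = A.shape[0], A.shape[1], B.shape[1]
C = torch.zeros((n, d + e, d + e))
C[:, :d, :d] = A    # top-left block
C[:, d:, d:] = B    # bottom-right block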
I was trying to understand how matrix multiplication works over more than 2 dimensions in DL frameworks, and I stumbled upon an article here.
The author used Keras to explain it, and it works for them.
But when I try to reproduce the same code in PyTorch, it fails with the error shown in the output of the following code.
PyTorch code:
a = torch.ones((2,3,4))
b = torch.ones((7,4,5))
c = torch.matmul(a,b)
print(c.shape)
Output: RuntimeError: The size of tensor a (2) must match the size of tensor b (7) at non-singleton dimension 0
Keras code:
a = K.ones((2,3,4))
b = K.ones((7,4,5))
c = K.dot(a,b)
print(c.shape)
Output: (2, 3, 7, 5)
Can somebody explain what it is that I'm doing wrong?
Matrix multiplication (aka matrix dot product) is a well-defined algebraic operation taking two 2D matrices.
Deep-learning frameworks (e.g., tensorflow, keras, pytorch) are tuned to operate on batches of matrices, hence they usually implement batched matrix multiplication, that is, applying the matrix dot product to a batch of 2D matrices.
The examples you linked to show how matmul processes a batch of matrices:
a = tf.ones((9, 8, 7, 4, 2))
b = tf.ones((9, 8, 7, 2, 5))
c = tf.matmul(a, b)
Note how all but the last two dimensions are identical ((9, 8, 7)).
This is NOT the case in your example - the leading ("batch") dimensions are different, hence the error.
Using identical leading dimensions in pytorch:
a = torch.ones((2,3,4))
b = torch.ones((2,4,5))
c = torch.matmul(a,b)
print(c.shape)
results in
torch.Size([2, 3, 5])
If you insist on dot products with different batch dimensions, you will have to explicitly define how to multiply the two tensors. You can do that using the very flexible torch.einsum:
a = torch.ones((2,3,4))
b = torch.ones((7,4,5))
c = torch.einsum('ijk,lkm->ijlm', a, b)
print(c.shape)
Resulting in:
torch.Size([2, 3, 7, 5])
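As a quick sanity check (a sketch added here, not part of the original answer), that einsum matches an explicit double loop over the two batch dimensions:

import torch

a = torch.rand((2, 3, 4))
b = torch.rand((7, 4, 5))
c = torch.einsum('ijk,lkm->ijlm', a, b)

# every matrix in a is multiplied with every matrix in b
for i in range(a.shape[0]):
    for l in range(b.shape[0]):
        assert torch.allclose(c[i, :, l, :], a[i] @ b[l])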
I'm trying to take array
a = [1,5,4,5,7,8,9,8,4,13,43,42]
and array
b = [3,5,6,2,7]
And I want b to hold indices into a, i.e. a new array that is
[a[b[0]], a[b[1]], a[b[2]], a[b[3]] ...]
So the values in b are indexes into a.
And there are 500k entries in a and 500k in b (approximately).
Is there a fast way to kick in all cores in numpy to do this?
I already do it just fine in for loops and it is sloooooooowwwwww.
Edit to clarify. The solution has to work for 2D and 3D arrays.
so maybe
b = [(2,3), (5,4), (1,2), (1,0)]
and we want
c = [a[b[0]], a[b[1]], ...]
Not saying it is fast, but the numpy way would simply be:
a[b]
outputs:
array([5, 8, 9, 4, 8])
This can be done in NumPy using advanced indexing. As Christian's answer pointed out, in the 1-D case, you would simply write:
a[b]
and that is equivalent to:
[a[b[x]] for x in range(b.shape[0])]
In higher-dimensional cases, however, you need a separate list of indices for each dimension. That means you can't do:
a = np.random.randn(7, 8, 9) # 3D array
b = [(2, 3, 0), (5, 4, 1), (1, 2, 2), (1, 0, 3)]
print(a[b]) # this is incorrect
but you can do:
b0, b1, b2 = zip(*b)
print(a[b0, b1, b2])
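Equivalently (a small sketch of the same advanced-indexing idea), you can convert b to an array and turn its transpose into an index tuple:

b_arr = np.array(b)            # shape (4, 3), one row per index triple
print(a[tuple(b_arr.T)])       # same result as a[b0, b1, b2]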
You can also use np.take (note that without an axis argument it indexes into the flattened array, so it matches a[b] only in the 1-D case):
print(np.take(a, b))
I solved this by writing a C extension to numpy called Tensor Weighted Interpolative Transfer, in order to get speed and multi-threading. In pure Python it takes 3 seconds for a 200x100x3 image scale-and-fade-across, and in multi-threaded C with 8 cores it takes 0.5 milliseconds for the same operation.
The core C code ended up being like
t2[dstidxs2[i2] + doff1] += t1[srcidxs2[i2] + soff1] * w1 * ws2[i2];
Here doff1 is the offset into the destination array, etc., and w1 and ws2 are the interpolated weights.
All the code is heavily optimized in C for speed (not for code size or maintainability).
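For reference, here is a rough numpy sketch of that inner accumulation (the variable names mirror the C snippet and the sample values are made up; this is not the actual library code):

import numpy as np

t1 = np.random.rand(16)                 # flattened source buffer
t2 = np.zeros(16)                       # flattened destination buffer
srcidxs2 = np.array([0, 1, 2, 3])       # source indices
dstidxs2 = np.array([4, 5, 5, 6])       # destination indices (may repeat)
ws2 = np.array([0.25, 0.5, 0.5, 1.0])   # per-element weights
w1, soff1, doff1 = 0.8, 0, 0            # outer weight and offsets

# np.add.at accumulates correctly even when destination indices repeat
np.add.at(t2, dstidxs2 + doff1, t1[srcidxs2 + soff1] * w1 * ws2)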
All code is available on https://github.com/RMKeene/twit and on PyPI.
I expect further optimization in the future, such as special-casing when all weights are 1.0.
I have a batch of matrices A with size torch.Size([batch_size, 9, 5]) and weight matrices B with size torch.Size([3, 5, 6]). In Keras, a simple K.dot(A, B) is able to handle the matrix multiplication to give an output with size (batch_size, 9, 3, 6). Here, each row in A is multiplied by the 3 matrices in B to form a (3x6) matrix.
How do you perform a similar operation in torch? From the documentation, torch.bmm requires that A and B must have the same batch size, so I tried this:
B = B.unsqueeze(0).repeat((batch_size, 1, 1, 1))
B.size() # torch.Size([batch_size, 3, 5, 6])
torch.bmm(A,B) # gives an error
RuntimeError: invalid argument 2: expected 3D tensor, got 4D
Well, the error is expected but how do I perform such an operation?
You can use Einstein notation to describe the operation you want as bxy,iyk->bxik, so you can use einsum to calculate it.
torch.einsum('bxy,iyk->bxik', (A, B)) will give you the answer you want.
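A quick shape check of that expression (a small sketch; batch_size chosen arbitrarily here):

import torch

batch_size = 4                      # arbitrary value for the sketch
A = torch.randn(batch_size, 9, 5)
B = torch.randn(3, 5, 6)

C = torch.einsum('bxy,iyk->bxik', A, B)
print(C.shape)                      # torch.Size([4, 9, 3, 6])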
So I am a little new to using matrices in Python, and I am looking for the best way to perform the following operation.
Say I have a vector of an arbitrary length, like this:
data = np.array(range(255))
And I want to fit this data inside a matrix with a shape like so:
concept = np.zeros((3, 9, 6))
Now, obviously this will not fit, and results in an error:
ValueError: cannot reshape array of size 255 into shape (3,9,6)
What would be the best way to go about fitting as much of the data vector inside the first matrix with the shape (3, 9, 6) while making sure any "overflow" is stored in a second (or third, fourth, etc.) matrix?
Does this make sense?
Basically, I want to be able to take a vector of any size and produce an arbitrary amount of matrices that have the data shaped according to the 3, 9, 6 dimensions.
Thank you for your help.
import numpy as np

def each_matrix(a, dims):
    size = dims.prod()
    # append size-1 zeros so every element of a lands in a full chunk;
    # the all-zero leftover past the last full chunk is dropped
    padded = np.concatenate([a, np.zeros(size - 1)])
    for i in range(len(padded) // size):   # integer division for Python 3
        yield padded[i * size:(i + 1) * size].reshape(dims)

for matrix in each_matrix(np.array(range(255)),
                          dims=np.array([3, 9, 6])):
    print(str(matrix) + '\n\n-------\n')
This will fill the last matrix with zeros.
Here is a rough solution to your problem.
import numpy as np

def split_padded(a, n):
    padding = n - len(a) % n             # zeros needed to pad a to a multiple of n
    numOfsplit = int(len(a) / n) + 1     # number of matrices we can produce
    print(padding, numOfsplit)
    return np.split(np.concatenate((a, np.zeros(padding))), numOfsplit)

data = np.array(range(255))
splitnum = 3 * 9 * 6
splitdata = split_padded(data, splitnum)
for mat in splitdata:
    print(mat.reshape(3, 9, 6))
It is very rough and only handles 1D input arrays.
First it calculates the number of zeros we need to pad (padding), then the number of matrices we can get out of the input data (numOfsplit), and finally does the splitting in the last line.
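A more compact sketch of the same padding idea (not from either original answer) pads the vector up to a multiple of 3*9*6 and reshapes in one go:

import numpy as np

data = np.array(range(255))
block = (3, 9, 6)
size = np.prod(block)                     # 162 values per matrix

pad = (-len(data)) % size                 # zeros needed to reach a multiple of size
padded = np.concatenate([data, np.zeros(pad, dtype=data.dtype)])
matrices = padded.reshape(-1, *block)     # shape (2, 3, 9, 6) here

print(matrices.shape)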
I have the following 3rd-order tensors. Both tensors are stacks of matrices: the first contains 100 10x9 matrices and the second contains 100 3x10 matrices (which I have just filled with ones for this example).
My aim is to multiply the matrices pairwise, in one-to-one correspondence, which would result in a tensor of shape (100, 3, 9). This could be done with a for loop that zips up both tensors and takes the dot product of each pair, but I am looking to do it with numpy operations only. So far here are some failed attempts.
Attempt 1:
import numpy as np
T1 = np.ones((100, 10, 9))
T2 = np.ones((100, 3, 10))
print(T2.dot(T1).shape)
Output of attempt 1:
(100, 3, 100, 9)
Which means it tried all possible combinations ... which is not what I am after.
Actually, none of the other attempts even run. I tried np.tensordot and np.einsum (I read here https://jameshensman.wordpress.com/2010/06/14/multiple-matrix-multiplication-in-numpy that it is supposed to do the job, but I did not get the Einstein indices right); the same link also mentions a tensor-cube reshaping method that I did not manage to visualize. Any suggestions or explanations on how to tackle this?
Did you try?
In [96]: np.einsum('ijk,ilj->ilk',T1,T2).shape
Out[96]: (100, 3, 9)
The way I figure this out is to look at the shapes:
(100, 10, 9)   (i, j, k)
(100, 3, 10)   (i, l, j)
------------------------
(100, 3, 9)    (i, l, k)
The two j dimensions are summed over and drop out; the others carry through to the output.
For 4d arrays, with dimensions like (100, 3, 2, 24), there are several options:
Reshape to 3d with T1.reshape(300, 2, 24), and reshape the result back with R.reshape(100, 3, ...). Reshaping is virtually costless and a good numpy tool.
Add an index to einsum: np.einsum('hijk,hilj->hilk', T1, T2), used in exactly the same way as i.
Or use an ellipsis: np.einsum('...jk,...lj->...lk', T1, T2). This expression works for 3d, 4d, and up; a quick shape check is sketched below.
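For example (the 4d shapes here are made up purely for illustration):

import numpy as np

# 3d case from the question
T1 = np.ones((100, 10, 9))
T2 = np.ones((100, 3, 10))
print(np.einsum('...jk,...lj->...lk', T1, T2).shape)   # (100, 3, 9)

# a hypothetical 4d case with an extra leading dimension
T1 = np.ones((100, 4, 10, 9))
T2 = np.ones((100, 4, 3, 10))
print(np.einsum('...jk,...lj->...lk', T1, T2).shape)   # (100, 4, 3, 9)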