Let tensor T have shape [B, N, N, 6]. I want to multiply the matrices T[b, :, :, i] for i in 0..3 by T[b, :, :, 5] element-wise, for each b in range(B). Note that T[b, :, :, 4] should not be changed. What is the best way to do this using TensorFlow?
My attempts:
result = tf.empty([B, N, N, 5])
for b in range(B):
    for i in range(4):
        result[b, :, :, i] = tf.mul(T[b, :, :, i], T[b, :, :, 5])
    result[b, :, :, 4] = T[b, :, :, 4]
In TensorFlow, it's not generally possible to build a tensor value by assigning to slices. The programming model tends to be more functional than imperative. One way of implementing your calculation is as follows:
result = tf.concat([tf.multiply(T[:, :, :, 0:4], T[:, :, :, 5:6]), T[:, :, :, 4:5]], axis=3)
Note that you don't need multiple multiplications, because (i) the original computation is already element-wise over the batch dimension, so there is no need to loop over b, and (ii) TensorFlow will broadcast the second argument of the multiplication along the last dimension.
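For a quick sanity check, here is a minimal eager-mode sketch (assuming TF 2.x and small illustrative values for B and N) comparing the one-liner against an explicit per-channel construction:
import tensorflow as tf

B, N = 2, 3
T = tf.random.uniform([B, N, N, 6])

# One-liner from the answer.
result = tf.concat([tf.multiply(T[:, :, :, 0:4], T[:, :, :, 5:6]), T[:, :, :, 4:5]], axis=3)

# Reference built channel by channel.
channels = [T[:, :, :, i] * T[:, :, :, 5] for i in range(4)] + [T[:, :, :, 4]]
expected = tf.stack(channels, axis=3)

print(tf.reduce_max(tf.abs(result - expected)).numpy())  # ~0.0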
I have a 2D array x of shape (48, 7) and a 4D array T of shape (48, 7, 48, 7). When I multiply x * T, NumPy broadcasts the dimensions, but not in the way I expected (actually, I don't understand how it is broadcasting). The following loop would achieve what I want:
for i in range(48):
    for j in range(7):
        Tx[i, j, :, :] = x[i, j] * T[i, j, :, :]
Where Tx is an array of shape (48, 7, 48, 7). My question is, is there a way to achieve the same result using broadcasting?
Broadcasting aligns trailing dimensions. In other words, x * T is doing this:
for i in range(48):
    for j in range(7):
        Tx[:, :, i, j] = x[i, j] * T[:, :, i, j]
To get the leading dimensions to line up, add unit dimensions to x:
Tx = x[..., None, None] * T
Alternatively, you can use np.einsum to specify the dimensions explicitly:
Tx = np.einsum('ij,ij...->ij...', x, T)
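A quick sanity check with random data, just to confirm that both forms reproduce the loop from the question:
import numpy as np

x = np.random.randn(48, 7)
T = np.random.randn(48, 7, 48, 7)

# Reference: the explicit loop from the question.
Tx_loop = np.empty_like(T)
for i in range(48):
    for j in range(7):
        Tx_loop[i, j, :, :] = x[i, j] * T[i, j, :, :]

print(np.allclose(x[..., None, None] * T, Tx_loop))              # True
print(np.allclose(np.einsum('ij,ij...->ij...', x, T), Tx_loop))  # True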
I found the solution.
Python broadcasts from the rightmost dimension and works its way to the left (source).
By transposing the first two dimensions and the last two dimensions:
T = np.transpose(T, (2,3,0,1))
It will then broadcast the way I expected. After that, the resulting array can be transposed again to recover the original shape:
Tx = x*T
Tx = np.transpose(Tx, (2,3,0,1))
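For completeness, a quick check with random data that this transpose round-trip agrees with the direct broadcasting form from the other answer:
import numpy as np

x = np.random.randn(48, 7)
T = np.random.randn(48, 7, 48, 7)

Tt = np.transpose(T, (2, 3, 0, 1))
Tx = np.transpose(x * Tt, (2, 3, 0, 1))

print(np.allclose(Tx, x[..., None, None] * T))  # True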
I have a tensor of shape [None, 2, 7] (a dummy shape, just for illustration). I need to reproduce the NumPy functionality below in TensorFlow.
x = np.array([[[1, 2, 3, 4, 5, 6, 7], [4, 5, 6, 1, 2, 3, 4]], [[1, 2, 3, 7, 6, 5, 4], [4, 5, 6, 4, 3, 2, 1]]])
# in numpy
x[:, :, -3:] = x[:, :, :3] - \
    x[:, :, :3].sum(axis=1, keepdims=True) / num  # num has shape [None, 1, 1]
I need to do the above operation in TensorFlow, but TensorFlow does not support assigning to slices of a tensor. In my case, x's None dimension depends on other operations; if x were an input placeholder, it would have been easy.
Is there any workaround for this problem?
Thanks in advance.
Do the operation that you need on the slice and concatenate it with the rest of the old tensor:
sliced = x[:, :, :3] - tf.reduce_sum(x[:, :, :3], axis=1, keepdims=True) / num
new_tensor = tf.concat([x[:, :, :-3], sliced], axis=-1)
If you need to keep the same name afterwards, just reassign it:
old_tensor = new_tensor
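Here is a minimal sketch with concrete shapes (assuming TF 2.x in eager mode, and a stand-in num of shape [batch, 1, 1]) just to confirm that the shapes line up:
import tensorflow as tf

x = tf.random.uniform([4, 2, 7])         # stand-in for the [None, 2, 7] tensor
num = tf.constant(2.0, shape=[4, 1, 1])  # stand-in for num

sliced = x[:, :, :3] - tf.reduce_sum(x[:, :, :3], axis=1, keepdims=True) / num
new_tensor = tf.concat([x[:, :, :-3], sliced], axis=-1)

print(new_tensor.shape)  # (4, 2, 7)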
I have two tensors: a of shape (16, 8, 8, 64) and b of shape (64, 64). Suppose I extract the last dimension of a into a column vector c; I want to compute matmul(matmul(c.T, b), c), and I want this done for each index of the first 3 dimensions of a, so that the final product has shape (16, 8, 8, 1). How can I achieve this in PyTorch?
Can be done as follows:
row_vec = a[:, :, :, None, :].float()  # shape (16, 8, 8, 1, 64)
col_vec = a[:, :, :, :, None].float()  # shape (16, 8, 8, 64, 1)
b = b[None, None, None, :, :].float()  # shape (1, 1, 1, 64, 64)
prod = torch.matmul(torch.matmul(row_vec, b), col_vec)  # shape (16, 8, 8, 1, 1)
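An equivalent formulation with torch.einsum (a sketch, assuming a and b are float tensors): c.T @ b @ c is a double contraction over the last dimension of a, and indexing with [..., None] restores the trailing singleton dimension asked for in the question.
import torch

a = torch.randn(16, 8, 8, 64)
b = torch.randn(64, 64)

prod = torch.einsum('xyzi,ij,xyzj->xyz', a, b, a)[..., None]  # shape (16, 8, 8, 1)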
This is pretty much the same question as Matrix/Tensor Triple Product?, but for Theano.
So I have three matrices A, B, C of sizes n*r, m*r, l*r, and I want to compute the 3D tensor of shape (n,m,l) resulting from the triple (trilinear) product:
X[i,j,k] = \sum_a A[i,a] B[j,a] C[k,a]
A, B and C are shared variables:
A = theano.shared(numpy.random.randn(n,r))
B = theano.shared(numpy.random.randn(m,r))
C = theano.shared(numpy.random.randn(l,r))
I'd like to write it with a single theano expression, is there a way to do so?
If there are many, which one is the fastest?
np.einsum('nr,mr,lr->nml', A, B, C)
is equivalent to
np.dot(A[:, None, :] * B[None, :, :], C.T)
which can be implemented in Theano as
theano.dot(A[:, None, :] * B[None, :, :], C.T)
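A quick NumPy check with random data that the dot formulation matches the einsum definition of the trilinear product:
import numpy as np

n, m, l, r = 3, 4, 5, 6
A = np.random.randn(n, r)
B = np.random.randn(m, r)
C = np.random.randn(l, r)

X_einsum = np.einsum('nr,mr,lr->nml', A, B, C)
X_dot = np.dot(A[:, None, :] * B[None, :, :], C.T)

print(np.allclose(X_einsum, X_dot))  # True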
I've got a NumPy array image that is a two-dimensional array where each element has two components. I want to convert this to another two-dimensional array where each element has three components: the first two unchanged, and a third one calculated from the first two, like so:
for x in range(0, width):
    for y in range(0, height):
        horizontal, vertical = image[y, x]
        annotated_image[y, x] = (horizontal, vertical, int(abs(horizontal) > 1.0 or abs(vertical) > 1.0))
This loop works as expected, but is very slow when compared to other numpy functions. For a medium-sized image this takes an unacceptable 30 seconds.
Is there a different way to do the same calculation but faster? The original image array does not have to be preserved.
You could just separate the components of the image and work with multiple images instead:
image_component1 = image[:, :, 0]
image_component2 = image[:, :, 1]
result = (np.abs(image_component1) > 1.) | (np.abs(image_component2) > 1.)
If for some reason you need the layout you specified, you could construct another three-dimensional array instead:
result = np.empty([image.shape[0], image.shape[1], 3], dtype=image.dtype)
result[:, :, 0] = image[:, :, 0]
result[:, :, 1] = image[:, :, 1]
result[:, :, 2] = (np.abs(image[:, :, 0]) > 1.) | (np.abs(image[:, :, 1]) > 1.)
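A compact alternative (a sketch, with a random stand-in for image, assumed to have shape (height, width, 2) as in the question): stack the flag onto the original components with np.dstack, casting it to the image's dtype.
import numpy as np

image = np.random.randn(4, 5, 2)  # stand-in for the real image array

flag = (np.abs(image[:, :, 0]) > 1.) | (np.abs(image[:, :, 1]) > 1.)
annotated_image = np.dstack([image, flag.astype(image.dtype)])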